Journal articles on the topic 'Large transactional data'

Consult the top 50 journal articles for your research on the topic 'Large transactional data.'


1

Rambe, Patient, and Johan Bester. "Financial cost implications of inaccurate extraction of transactional data in large African power distribution utility." Problems and Perspectives in Management 14, no. 4 (2016): 112–23. http://dx.doi.org/10.21511/ppm.14(4).2016.14.

Abstract:
In view of the increasingly competitive business world, prudent spending and cost recovery have become the driving force for the optimal performance of large public organizations. This study, therefore, examined the cost-effectiveness of a Large Energy Utility (LEU) in a Southern African country by exploring the relationship between extraction of transactional customer data (that is, data on the servicing and repairing of energy faults) and the Utility's recurrent expenditure (especially its technicians' overtime bill). Using data mining, a large corpus of the LEU Area Centre (AC) data was extracted to establish the relationship between transactional customer data extraction and capture and the financial cost of the LEU (e.g., recurrent expenditure on the overtime bill). Results indicate that incorrect extraction and capturing of transactional customer service data has contributed significantly to the LEU's escalating overtime wage bill. The data also demonstrate that the correct extraction and capturing of transactional customer service data can reduce the financial costs of this LEU. The paper represents one of the few attempts to examine the effects of correct data extraction and capture on the financial resources of a struggling large public energy utility. Using Resource Based Theory, the study also demonstrates how technicians' feedback on incorrect transactions enhances the measurement of inaccurate transactional data, despite the incentives created by a burgeoning overtime wage bill. Keywords: Large Energy Utility, inaccurate transactional data extraction, financial costs, Resource Based View. JEL Classification: L94, L97, C8
2

Abdurashitova, Muniskhon. "DATA MODELS AND ARCHITECTURES IN DATA WAREHOUSING." RESEARCH AND EDUCATION 3, no. 3 (2024): 21–25. https://doi.org/10.5281/zenodo.10897094.

Abstract:
The separation between transactional computing and data analysis ensures that the transactional systems responsible for processing and recording business transactions can operate efficiently and without interruption. Meanwhile, data analysis systems can focus on processing and analyzing large amounts of data to extract valuable insights that can inform business decisions. By employing different data warehousing architectures and models, organizations can separate these two functions, preventing the resource-intensive data analysis processes from interfering with the smooth functioning of the transactional systems.
3

Aljojo, Nahla. "Examining Heterogeneity Structured on a Large Data Volume with Minimal Incompleteness." ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 9, no. 2 (2021): 30–37. http://dx.doi.org/10.14500/aro.10857.

Abstract:
While Big Data analytics can provide a variety of benefits, processing heterogeneous data comes with its own set of limitations. Because transaction patterns must be studied independently when working with Bitcoin data, this study examines Twitter data related to Bitcoin and investigates communication patterns in Bitcoin transaction tweets. Using the hashtags #Bitcoin or #BTC on Twitter, a vast amount of data was gathered and mined to uncover the patterns that everyone (speculators, teachers, or stakeholders) uses on Twitter to discuss Bitcoin transactions. The aim is to determine the direction of Bitcoin transaction tweets based on historical data. As a result, this research proposes using Big Data analytics to track Bitcoin transaction communications in tweets in order to discover a pattern. The Hadoop MapReduce platform was used. The findings indicate that, in the map step of the procedure, Hadoop tokenizes the dataset and parses it to the mapper, where thirteen patterns were established and then reduced to three patterns using attributes previously stored in the Hadoop context. One of these is the emoji data that was left out in previous research discussions. The text, however, is only one piece of the puzzle of Bitcoin transaction interaction, and its key message is "no certainty, only possibilities" in Bitcoin transactions.
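To make the map/reduce split described above concrete, here is a minimal Python sketch in the spirit of Hadoop Streaming; the tokenizer, the sample tweets, and the pattern granularity are illustrative stand-ins, not the paper's thirteen-pattern procedure.

```python
import re
from collections import Counter
from typing import Iterable, Iterator, Tuple

def mapper(tweet: str) -> Iterator[Tuple[str, int]]:
    # Map step: tokenize a tweet and emit (token, 1) pairs, keeping
    # hashtags and emoji-range codepoints as separate tokens.
    for token in re.findall(r"#\w+|\w+|[\U0001F300-\U0001FAFF]", tweet):
        yield (token.lower(), 1)

def reducer(pairs: Iterable[Tuple[str, int]]) -> Counter:
    # Reduce step: sum the counts emitted by all mappers per token.
    counts: Counter = Counter()
    for token, n in pairs:
        counts[token] += n
    return counts

if __name__ == "__main__":
    tweets = [  # synthetic sample data, not from the paper
        "#Bitcoin breaking out \U0001F680",
        "No certainty, only possibilities #BTC",
        "#BTC #Bitcoin dip incoming \U0001F4C9",
    ]
    pairs = (pair for t in tweets for pair in mapper(t))
    print(reducer(pairs).most_common(5))
```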
4

Rao, M. Venkata Krishna, Ch Suresh, K. Kamakshaiah, and M. Ravikanth. "Prototype Analysis for Business Intelligence Utilization in Data Mining Analysis." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (2017): 30. http://dx.doi.org/10.23956/ijarcsse.v7i7.93.

Abstract:
The tremendous increase in the availability of more disparate data sources than ever before has made it difficult to produce routine utility reports across multiple transaction systems, let alone to integrate large volumes of historical data. This is a central concern in data exploration over high-volume transactional systems with real-time data processing. The problem mainly arises in data warehouses and other data storage arrangements in Business Intelligence (BI) for knowledge management and business resource planning. In this setting, BI comprises the software construction of data warehouse query processing for report generation and high-utility data mining in transactional data systems. The growth of huge, voluminous data in the real world poses challenges to the research and business community for effective data analysis and prediction. In this paper, we analyze different data mining techniques and methods for Business Intelligence in the analysis of transactional databases. To that end, we discuss the key issues by performing an in-depth analysis of business data, including database applications in transaction data source system analysis. We also discuss different integrated techniques for data analysis in business operational processes to find feasible solutions in business intelligence.
5

Tomić, Nenad, and Violeta Todorović. "The influence of Big data concept on future tendencies in payment systems." Megatrend revija 17, no. 3 (2020): 115–30. http://dx.doi.org/10.5937/megrev2003115t.

Abstract:
The new wave of information and communication technology transformation relies on the concepts of the Internet of Things, Big Data and machine learning. These concepts will enable the connection and independent communication of a large number of devices, the processing of data that arises as a result of these processes, and learning based on the refined information. The payment system is a sector that will be strongly affected by the coming changes. A large number of transactions create an information basis whose analysis can provide precise inputs for business decision making. The subject of the paper is the impact of managing a large amount of transactional data on key stakeholders in the payment process. The aim of the paper is to identify the key advantages and dangers that the Big Data concept will bring to the payment industry. The general conclusion is that the use of Big Data tools can facilitate the timely distribution of payment services and increase the security of transactions, but the price in the form of a loss of privacy is extremely high.
6

Chen, Hongzhi, Changji Li, Chenguang Zheng, et al. "G-Tran." Proceedings of the VLDB Endowment 15, no. 11 (2022): 2545–58. http://dx.doi.org/10.14778/3551793.3551813.

Abstract:
Graph transaction processing poses unique challenges such as random data access due to the irregularity of graph structures, low throughput and high abort rate due to the relatively large read/write sets in graph transactions. To address these challenges, we present G-Tran, a remote direct memory access (RDMA)-enabled distributed in-memory graph database with serializable and snapshot isolation support. First, we propose a graph-native data store to achieve good data locality and fast data access for transactional updates and queries. Second, G-Tran adopts a fully decentralized architecture that leverages RDMA to process distributed transactions with the massively parallel processing (MPP) model, which can achieve high performance by utilizing all computing resources. In addition, we propose a new multi-version optimistic concurrency control (MV-OCC) protocol with two optimizations to address the issue of large read/write sets in graph transactions. Extensive experiments show that G-Tran achieves competitive performance compared with other popular graph databases on benchmark workloads.
7

Liu, Xiangwen, Xia Feng, and Yuquan Zhu. "Transactional Data Anonymization for Privacy and Information Preservation via Disassociation and Local Suppression." Symmetry 14, no. 3 (2022): 472. http://dx.doi.org/10.3390/sym14030472.

Abstract:
Ubiquitous devices in IoT-based environments create a large amount of transactional data on daily personal behaviors. Releasing these data across various platforms and applications for data mining can create tremendous opportunities for knowledge-based decision making. However, solid guarantees on the risk of re-identification are required to make these data broadly available. Disassociation is a popular method for transactional data anonymization against re-identification attacks in privacy-preserving data publishing. The anonymization algorithm of disassociation is performed in parallel, suitable for the asymmetric paralleled data process in IoT where the nodes have limited computation power and storage space. However, the anonymization algorithm of disassociation is based on the global recoding mode to achieve transactional data k^m-anonymization, which leads to a loss of combinations of items in transactional datasets, thus decreasing the data quality of the published transactions. To address the issue, we propose a novel vertical partition strategy in this paper. By employing local suppression and global partition, we first eliminate the itemsets which violate k^m-anonymity to construct the first k^m-anonymous record chunk. Then, through itemset creation and reduction, we recombine the globally partitioned items from the first record chunk to construct the remaining k^m-anonymous record chunks. The experiments illustrate that our scheme can retain more associations between items in the dataset, which improves the utility of the published data.
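For orientation, the k^m-anonymity guarantee targeted here can be checked directly from its definition: every combination of at most m items that occurs in the published data must be supported by at least k records. A brute-force checker on toy data (not the paper's partitioning algorithm):

```python
from itertools import combinations

def is_km_anonymous(transactions, k, m):
    """Check k^m-anonymity: any itemset of size <= m occurring in the
    data must appear in at least k transactions."""
    for size in range(1, m + 1):
        support = {}
        for t in transactions:
            for itemset in combinations(sorted(set(t)), size):
                support[itemset] = support.get(itemset, 0) + 1
        if any(count < k for count in support.values()):
            return False
    return True

# Toy dataset: every single item occurs at least twice, but the
# pair {b, c} occurs only once, so k=2, m=2 fails.
data = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"a", "c"}]
print(is_km_anonymous(data, k=2, m=1))  # True
print(is_km_anonymous(data, k=2, m=2))  # False
```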
8

Jani, Yash. "Strategies for Seamless Data Migration in Large-Scale Enterprise Systems." Journal of Scientific and Engineering Research 6, no. 12 (2019): 285–90. https://doi.org/10.5281/zenodo.13347837.

Abstract:
Data migration is a critical process in the evolution of enterprise systems, particularly when transitioning from traditional relational SQL databases like Oracle to NoSQL solutions like MongoDB. This paper explores comprehensive strategies for successful data migration in large-scale environments, emphasizing meticulous planning, efficient execution, and thorough post-migration verification. Drawing on practical experiences, this study provides actionable insights and best practices to mitigate risks and ensure data integrity during migration, with a focus on the limitations and challenges of migrating transactional data to NoSQL databases [1][2][3].
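A minimal sketch of the relational-to-document migration loop such a transition involves, using sqlite3 as a stand-in for the relational source and pymongo for the MongoDB target; the connection string, table, and column names are hypothetical.

```python
import sqlite3
from pymongo import MongoClient

BATCH = 1000  # migrate in batches to bound memory use

src = sqlite3.connect("legacy.db")   # stand-in for the relational source
src.row_factory = sqlite3.Row
dst = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

cur = src.execute("SELECT id, customer_id, amount, created_at FROM orders")
while True:
    rows = cur.fetchmany(BATCH)
    if not rows:
        break
    # Transform each relational row into a document; _id preserves the PK
    docs = [{"_id": r["id"],
             "customer": r["customer_id"],
             "amount": r["amount"],
             "createdAt": r["created_at"]} for r in rows]
    dst.insert_many(docs, ordered=False)

# Post-migration verification: source row count must match document count
assert src.execute("SELECT COUNT(*) FROM orders").fetchone()[0] \
    == dst.count_documents({})
```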
9

Lokhande, Dheeraj Bhimrao, and R. C. Thool. "A Novel Approach for Transaction Management in Heterogeneous Distributed Database Systems." International Journal of Engineering Sciences & Research Technology 5, no. 3 (2016): 64–71. https://doi.org/10.5281/zenodo.46993.

Abstract:
RESTful APIs are widely adopted in designing components that are combined to form web information systems. The use of REST is growing with the inclusion of smart devices and the Internet of Things, within the scope of web information systems, along with large-scale distributed NoSQL data stores and other web-based and cloud-hosted services. There is an important subclass of web information systems and distributed applications which would benefit from stronger transactional support, as typically found in traditional enterprise systems. In this paper, we propose REST with Transactions, a transactional RESTful data access protocol and API that extends HTTP to provide multi-item transactional access to data and state information across heterogeneous systems. We describe a case study called Tora, where we provide access through REST+T to an existing key-value store (WiredTiger) that was intended for embedded operation.
10

Cheng, Audrey, Xiao Shi, Lu Pan, et al. "RAMP-TAO." Proceedings of the VLDB Endowment 14, no. 12 (2021): 3014–27. http://dx.doi.org/10.14778/3476311.3476379.

Abstract:
Facebook's graph store TAO, like many other distributed data stores, traditionally prioritizes availability, efficiency, and scalability over strong consistency or isolation guarantees to serve its large, read-dominant workloads. As product developers build diverse applications on top of this system, they increasingly seek transactional semantics. However, providing advanced features for select applications while preserving the system's overall reliability and performance is a continual challenge. In this paper, we first characterize developer desires for transactions that have emerged over the years and describe the current failure-atomic (i.e., write) transactions offered by TAO. We then explore how to introduce an intuitive read transaction API. We highlight the need for atomic visibility guarantees in this API with a measurement study on potential anomalies that occur without stronger isolation for reads. Our analysis shows that 1 in 1,500 batched reads reflects partial transactional updates, which complicate the developer experience and lead to unexpected results. In response to our findings, we present the RAMP-TAO protocol, a variation based on the Read Atomic Multi-Partition (RAMP) protocols that can be feasibly deployed in production with minimal overhead while ensuring atomic visibility for a read-optimized workload at scale.
11

Langsford, Sam. "Integrating payment transaction data, direct from source : Opportunities and limitations for large merchants." Journal of Payments Strategy & Systems 19, no. 1 (2025): 19. https://doi.org/10.69554/anjc8889.

Abstract:
Within the world of payments, the perceived importance of data continues to increase. In particular, there is a stronger desire for decisions to be data-driven and to leverage technology to improve financial performance. This paper discusses how large merchants can utilise their existing relationships with payment providers to collect high volumes of raw transactional data, straight from the payment provider's own system. Such data can be utilised to address numerous payments-related profitability challenges as well as to support strategic business plans more broadly. This paper describes a wide range of business use cases that may be facilitated by obtaining and integrating payment data, including process efficiencies, cost control, customer profiling, identifying revenue growth opportunities, and fraud insights. It also discusses the advantages of taking an in-house, raw-format approach to the gathering of payment transactional data, including avoiding added cost on the payment service fee, synergies in internal processing of data to universal standards, and ultimate control of the data once obtained. The paper goes on to address the essential challenges to overcome, notably management engagement and investment in the required infrastructure, and the risk of unrealistic expectations. Finally, the paper encourages large merchants to recognise the high potential value of payment transactional data, and to consider the use of such data in new and more expansive ways.
12

Rao, Divvela Srinivasa, et al. "A Survey on Frequent Item Set Mining for Large Transactional Data." Information Technology in Industry 9, no. 2 (2021): 885–93. http://dx.doi.org/10.17762/itii.v9i2.426.

Abstract:
Data analytics plays an important role in the decision-making process. The insights obtained from pattern analysis bring many benefits, such as cost cutting, better revenue, and a stronger competitive advantage. However, the hidden patterns of frequent itemsets take more time to extract as data increases over time, and the heavy computation involved means that mining must keep memory consumption low. Therefore, an efficient algorithm is required for mining the hidden patterns of frequent itemsets with low memory use and a short run time. This paper presents a review of different algorithms for finding frequent patterns so that a more efficient algorithm for finding frequent itemsets can be developed.
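For context, most algorithms surveyed in this line of work refine the classic Apriori level-wise search; a compact, illustrative Python version:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent itemset mining: a (k+1)-itemset can be frequent
    only if every one of its k-subsets is frequent (the Apriori property)."""
    transactions = [frozenset(t) for t in transactions]
    level = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while level:
        # Count support of each candidate with one pass over the data
        counts = {c: sum(c <= t for t in transactions) for c in level}
        current = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(current)
        # Join step + prune step to build the next level of candidates
        keys = list(current)
        level = {a | b for a in keys for b in keys
                 if len(a | b) == len(a) + 1
                 and all(frozenset(s) in current
                         for s in combinations(a | b, len(a)))}
    return frequent

txns = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]
print(apriori(txns, min_support=2))
```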
13

Suhara, Yoshihiko, Mohsen Bahrami, Burcin Bozkaya, and Alex ‘Sandy’ Pentland. "Validating Gravity-Based Market Share Models Using Large-Scale Transactional Data." Big Data 9, no. 3 (2021): 188–202. http://dx.doi.org/10.1089/big.2020.0161.

14

Tong, Bing, Yan Zhou, Chen Zhang, et al. "Galaxybase: A High Performance Native Distributed Graph Database for HTAP." Proceedings of the VLDB Endowment 17, no. 12 (2024): 3893–905. http://dx.doi.org/10.14778/3685800.3685814.

Abstract:
We introduce Galaxybase, a native distributed graph database that addresses the increasing demands for processing large volumes of graph data in diverse industries like finance, manufacturing, and government. Designed to handle the requirements of both transactional and analytical workloads, Galaxybase stands out with its novel data storage and transaction mechanisms. At its core, Galaxybase utilizes a Log-Structured Adjacency List coupled with an Edge Page structure, optimizing read-write operations across a spectrum of tasks such as graph traversals and single edge queries. A notable aspect of Galaxybase is its execution of custom distributed transaction modes tailored for HTAP transactions, allowing for the facilitation of bidirectional and interactive transactions. It ensures data integrity and minimal latency while enabling simultaneous processing of OLTP and OLAP workloads without blocking. Experimental results show that Galaxybase achieves high throughput and low latency in both OLTP and OLAP workloads, across various graph query scenarios and resource conditions. Galaxybase has been deployed in leading banks, education, telecommunication and energy sectors in China, consistently maintaining robust performance for HTAP workloads over the years.
15

Koppichetti, Ravi Kiran. "ETL Strategies for Large-Scale Retail Data Warehouses." International Journal of Leading Research Publication 3, no. 8 (2022): 1–12. https://doi.org/10.5281/zenodo.15026506.

Abstract:
Large-scale retail data warehouses are critical for storing and analyzing vast amounts of transactional, operational, and customer data. Effective ETL (Extract, Transform, Load) strategies are essential for ensuring that data is accurately extracted from diverse sources, transformed into a usable format, and loaded into the data warehouse for analysis. This paper explores the challenges of implementing ETL processes in large-scale retail data warehouses and provides strategies for optimizing ETL workflows. Key topics include data integration, scalability, performance optimization, and the use of modern ETL tools and technologies. The paper concludes with recommendations for designing robust ETL pipelines that meet the demands of the retail industry.
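A minimal extract-transform-load sketch of the kind of pipeline the abstract discusses, with sqlite3 standing in for the warehouse and an inline CSV standing in for the retail source; the schema is hypothetical.

```python
import csv
import io
import sqlite3
from datetime import datetime

# Extract: in production this would read a point-of-sale export;
# an inline sample keeps the sketch self-contained.
SOURCE = io.StringIO(
    "sale_id,store,sold_at,amount\n"
    "1,ny01,2022-03-01T10:15:00,19.99\n"
    "2,ny01,2022-03-01T11:02:00,5.50\n"
    "bad-row,,,\n"
)
raw = list(csv.DictReader(SOURCE))

# Transform: cast types, derive a date key, and drop malformed rows
clean = []
for r in raw:
    try:
        clean.append((
            int(r["sale_id"]),
            r["store"].strip().upper(),
            datetime.fromisoformat(r["sold_at"]).strftime("%Y%m%d"),
            round(float(r["amount"]), 2),
        ))
    except (KeyError, ValueError):
        continue  # a real pipeline would route rejects to a review queue

# Load: bulk insert into the warehouse fact table
dwh = sqlite3.connect(":memory:")
dwh.execute("""CREATE TABLE fact_sales
               (sale_id INTEGER PRIMARY KEY, store TEXT,
                date_key TEXT, amount REAL)""")
dwh.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)", clean)
dwh.commit()
print(dwh.execute("SELECT COUNT(*) FROM fact_sales").fetchone())  # (2,)
```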
16

Budaraju, Raja Rao. "Optimized Privacy Preserved Itemset Mining using Federated Learning from Transactional Data in Data Mining." Advances in Nonlinear Variational Inequalities 28, no. 1s (2024): 460–69. https://doi.org/10.52783/anvi.v28.2446.

Abstract:
Federated learning allows a global machine-learning model to be trained without moving data from one location to another. This is especially important for applications in the healthcare industry, where records are full of sensitive, personally identifiable data, and data analysis techniques need to demonstrate that they adhere to legal requirements. Even though federated learning forbids the sharing of raw data, the resulting machine learning model, or the model variables made public during training, can still be the target of privacy attacks. In this research, we first present an embedding model for the transaction classification task based on federated learning. The model views transaction data as a collection of frequent itemsets. By maintaining the contextual relationship between frequent itemsets, the algorithm can then learn low-dimensional continuous matrices. We conduct a thorough experimental investigation on a large volume of high-dimensional transactional data to validate the created models that incorporate federated learning and attention-based techniques. Our investigations demonstrate how the categorization can aid in the design of federated learning systems. By methodically summarising current federated learning systems, we provide design considerations, use cases, and prospects for future research.
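The "train globally without moving raw data" idea can be illustrated with a toy federated-averaging round in NumPy; this does not reproduce the paper's embedding or attention models, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local logistic-regression training; raw X, y never leave."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three clients, each holding a private transactional feature matrix
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
w_global = np.zeros(4)

for _round in range(10):
    # Each client trains locally; only model weights are shared
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    # FedAvg: the server averages the returned weights
    w_global = np.mean(local_ws, axis=0)

print(w_global)
```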
17

Eriksson, Kent, and Cecilia Hermansson. "Do consumers subjectively perceive relationships in objectively defined relational, interimistic, and transactional exchange in financial services?" International Journal of Bank Marketing 35, no. 3 (2017): 472–94. http://dx.doi.org/10.1108/ijbm-09-2016-0130.

Abstract:
Purpose Customer interactions with sellers change as social interactions in society change. The old dichotomy between transaction and relation exchange may no longer be valid as customers form relationships with sellers in new ways. It is against this background that the authors study how customers’ subjective perception of relational exchange appears in objectively defined transactional and relational exchange forms. The authors study one bank’s customers, and, based on objective bank records, the authors identify segments that behave as transactional and relational customers. The authors also identify a group of customers who are in between transactional and relational, and the authors call these interimistic relational, since they interact repeatedly with the bank in a short period of time. The paper aims to discuss these issues. Design/methodology/approach The authors study how subjective attributes of relational exchange differ in objectively defined transactional, interimistic, and relational customer groups. The authors use a large data set, consisting of a combination of survey and objective bank records for 90,528 bank customers. Findings Findings are that the old dichotomy between transaction and relation is no longer valid, since customers’ exchange behavior and perception of exchange do not match up when it comes to the transaction-relation dichotomy. The authors find empirical evidence that the subjective relational attributes can be observed in objectively defined relational, interimistic, and transactional customer groups. Overall, subjective relational attributes are strongest in the objective relational group; they are weaker in the interimistic group. Relational attributes are weakest, but still present, in the transactional group. Practical implications The findings presented here suggest strong support for relationship marketing practice, since even customers who behave transactionally perceive that they have an element of relationship with the seller. The authors find that customers may behave in a relational, interimistic, and transactional way, but that they perceive themselves as more or less relational. The practical implication is that customer analysis should focus on exchange forms, and that it is essential to analyze how exchange changes, and how multiple exchange forms may be combined in customer behavior and perception. Social implications The social implications of this paper are that marketers should consider the exchange between customer and financial service supplier as more or less of a relationship, and more or less of a service. Financial service firm strategies and regulation of financial services should acknowledge that no financial service transaction is independent of the relationship between the financial service provider and the customer. It may seem so objectively, but subjectively, it is not. Originality/value The authors present a unique comparison of objective and subjective customer exchange. There are two contributions that come from this research. The first is that customers perceive themselves as partially relational, even though they behave transactionally. The other contribution is that the authors identified interimistic relational exchange (IRE) as an exchange form in between relational and transactional. IRE can potentially be very important for market research and practice, as it captures modern market behavior.
In today’s world, consumers form their perceptions in a multitude of ways, and may therefore have relational attitudes and transactional behaviors. More research is needed into how consumer perceptions and behaviors relate to each other, and how they impact consumers’ purchase of financial services.
18

Miranda, Eka, Rudy Rudy, and Eli Suryani. "Implementasi Data Warehouse pada Bagian Pemasaran Perguruan Tinggi." ComTech: Computer, Mathematics and Engineering Applications 3, no. 1 (2012): 315. http://dx.doi.org/10.21512/comtech.v3i1.2417.

Abstract:
Transactional data are widely owned by higher education institutes, but the utilization of these data to support decision making has not been maximized. Therefore, higher education institutes need analysis tools to maximize decision making processes. Based on this issue, a data warehouse design was created to: (1) store large amounts of data; (2) potentially gain new perspectives on distributed data; (3) provide reports and answers to users’ ad hoc questions; (4) perform data analysis of external conditions and transactional data from the marketing activities of universities, since marketing is both a supporting field and the cutting edge of higher education institutes. The methods used to design and implement the data warehouse are analysis of records related to the marketing activities of higher education institutes and data warehouse design. This study results in a data warehouse design and its implementation to analyze the external data and transactional data from the marketing activities of universities to support decision making.
19

Koutsofios, Eleftherios, Stephen North, and Russ Truscott. "Large scale network visualization with 3D-graphics." Theme: Information landscapes 10, no. 3 (2001): 230–36. http://dx.doi.org/10.1075/idj.10.3.04kou.

Abstract:
Data from large networks, such as a voice telephone network or a residential internet service, must be analysed to see how well the requirements of users are met. Visualization is essential to understand and explain this data. This article describes Swift-3D, a viewer for interactive discovery of patterns and anomalies in large-scale transactional data sets.
20

Irawan, Ragil Yoga, Budi Susanto, and Yuan Lukito. "Building Data Warehouse and Dashboard of Church Congregation Data." Jurnal Terapan Teknologi Informasi 3, no. 2 (2021): 85–94. http://dx.doi.org/10.21460/jutei.2019.32.183.

Abstract:
A data warehouse is essential for an organization to process and analyze its data. Hence, a data warehouse, together with a dashboard to visualize the processed data, was built to accommodate the church administrators' need to analyze a large set of church congregation data. The data warehouse is built using the Kimball principle, which emphasizes the implementation of a dimensional model in the data warehouse rather than the relational model used in a regular transactional database. An ETL (extract, transform, load) process is used to retrieve all data from the regular transactional database and transform it so that it can be loaded into the data warehouse. A dashboard is then built to visualize the data from the data warehouse so that users can view the processed data easily. Users can also export the processed data into an Excel file that can be downloaded from the dashboard. A web service is built to get data from the data warehouse and return it to the dashboard.
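A minimal illustration of the Kimball-style dimensional model the abstract contrasts with a transactional schema: a fact table keyed to dimension tables, queried by aggregating over dimension attributes. This uses sqlite3 for brevity, and the table and column names are hypothetical.

```python
import sqlite3

dwh = sqlite3.connect(":memory:")
dwh.executescript("""
-- Dimensions describe the 'who' and 'when' of each measured event
CREATE TABLE dim_member (
    member_key INTEGER PRIMARY KEY, name TEXT, birth_year INTEGER);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
-- The fact table holds measures plus foreign keys into the dimensions
CREATE TABLE fact_attendance (
    member_key INTEGER REFERENCES dim_member(member_key),
    date_key   INTEGER REFERENCES dim_date(date_key),
    attended   INTEGER);
""")

# A dashboard query then aggregates facts by dimension attributes
rows = dwh.execute("""
    SELECT d.year, d.month, SUM(f.attended)
    FROM fact_attendance f JOIN dim_date d USING (date_key)
    GROUP BY d.year, d.month""").fetchall()
print(rows)
```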
21

Zhao, Lin, Lefei Li, and Zuo‐Jun Max Shen. "Transactional and in‐store display data of a large supermarket for data‐driven decision‐making." Naval Research Logistics (NRL) 67, no. 8 (2020): 617–26. http://dx.doi.org/10.1002/nav.21957.

22

Dora, Begum, and Nazli Baydar. "Transactional associations of maternal depressive symptoms with child externalizing behaviors are small after age 3." Development and Psychopathology 32, no. 1 (2019): 293–308. http://dx.doi.org/10.1017/s0954579419000075.

Abstract:
A large and growing body of research suggests that maternal depressive symptoms and child externalizing behaviors are strongly associated. Theoretical arguments supported by these findings led to the question of whether maternal depressive symptoms are transactionally associated with child externalizing behaviors. Using 5-year nationally representative longitudinal data from Turkey (N = 1,052), we estimated a transactional bivariate autoregressive latent trajectory model addressing this question. This model disaggregated the association of the two processes into two components: (a) the association of the interindividual differences in the trajectories; and (b) the intradyad association of the changes in maternal depressive symptoms with the changes in child externalizing behaviors. Although maternal depressive symptoms were robustly associated with child externalizing behaviors at age 3, the transactional associations of the two processes were small prior to age 5 and absent at ages 5 to 7. Furthermore, maternal harsh parenting did not have a mediating role in the limited transactional association of maternal depressive symptoms with child externalizing behaviors.
23

Shah, Syed Mir Muhammad, and Kamal Bin Ab. Hamid. "Transactional Leadership and Job Performance: An Empirical Investigation." Sukkur IBA Journal of Management and Business 2, no. 2 (2015): 74. http://dx.doi.org/10.30537/sijmb.v2i2.94.

Abstract:
The present study investigates the relationship between transactional leadership and job performance in six large banks of Pakistan. The survey method was used to collect data from the middle managers of these banks. The data were analyzed and reported using Smart-PLS and its standard reporting style. The findings of the study reveal that transactional leadership has a significant relationship with job performance. The last part of the paper presents insights for future research.
24

Yamaguchi, Takehiro, and Ayahiko Niimi. "Extraction of Community Transition Rules from Data Streams as Large Graph Sequence." Journal of Advanced Computational Intelligence and Intelligent Informatics 15, no. 8 (2011): 1073–81. http://dx.doi.org/10.20965/jaciii.2011.p1073.

Abstract:
In this study, we treat transactional sets of data streams as a graph sequence. This graph sequence represents both the relational structures of data for each period and changes in these structures. In addition, we analyze changes in a community in this graph sequence. Our proposed algorithm extracts community transition rules to detect communities that appear irregularly in a graph sequence using our proposed method combined with adaptive graph kernels and hierarchical clustering. In experiments using synthetic datasets and social bookmark datasets, we demonstrate that our proposed algorithm detects changes in a community appearing irregularly.
25

Menon, Syam, and Sumit Sarkar. "Privacy and Big Data: Scalable Approaches to Sanitize Large Transactional Databases for Sharing." MIS Quarterly 40, no. 4 (2016): 963–81. http://dx.doi.org/10.25300/misq/2016/40.4.08.

26

Dubey, Shweta A., and Kemal Koche. "A Survey Paper on High Utility Itemsets Mining." International Journal of Engineering Sciences & Research Technology 5, no. 5 (2016): 852–57. https://doi.org/10.5281/zenodo.52492.

Abstract:
An important data mining task that has received considerable research attention in recent years is the discovery of association rules from transactional databases. Recently, utility mining has come to play a vital role in data mining. Discovering high utility itemsets from a transactional database means discovering itemsets with high profits. In this survey paper, we discuss various methods and algorithms for recovering high utility itemsets from a large database without losing a large amount of information. We present different kinds of algorithms, such as CHUD (Closed High Utility Itemset Discovery) for mining closed itemsets, and a further method called DAHU, which recovers all high utility itemsets from the result generated by the CHUD algorithm. Itemset mining has a wide range of applications in biomedicine, retail stores, supermarkets, etc.
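To make "high utility" concrete: an itemset's utility is the sum, over the transactions containing it, of each item's purchase quantity times its unit profit, and itemsets above a minimum utility threshold are reported. A brute-force toy sketch (this exponential enumeration is for illustration only, not the CHUD/DAHU algorithms):

```python
from itertools import combinations

profit = {"tv": 50.0, "cable": 5.0, "mount": 12.0}   # unit profits (toy)
transactions = [                                      # item -> quantity bought
    {"tv": 1, "cable": 2},
    {"tv": 2, "mount": 1},
    {"cable": 4, "mount": 1},
]

def utility(itemset, t):
    # Utility of an itemset in one transaction: 0 unless all items present
    if not itemset <= t.keys():
        return 0.0
    return sum(t[i] * profit[i] for i in itemset)

def high_utility_itemsets(min_util):
    items = {i for t in transactions for i in t}
    result = {}
    for size in range(1, len(items) + 1):
        for c in combinations(sorted(items), size):
            u = sum(utility(set(c), t) for t in transactions)
            if u >= min_util:
                result[c] = u
    return result

print(high_utility_itemsets(min_util=60.0))  # e.g. ('tv',) -> 150.0
```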
27

Samantapudi, Rama Krishna Raju. "Table Extraction from Financial and Transactional Documents." International journal of IoT 05, no. 01 (2025): 95–125. https://doi.org/10.55640/ijiot-05-01-06.

Abstract:
With the proliferation of digital financial services and digital transactional documents, data volumes are vastly increasing, including invoices, receipts, bank statements, and balance sheets. Information extraction from these documents has therefore attracted keen interest. Manual data extraction from such documents is time-consuming and prone to human error, as the documents come in many formats. This paper covers techniques, tools, and technology for extracting tables from financial and transactional documents, specifically in the case of vertical tables and in the presence of mixed-type data representations. Table extraction means extracting tabular data from a readable image or document and transforming it into a structured format (CSV/JSON). The paper discusses extraction methods such as rule-based extraction, optical character recognition (OCR), and machine learning models. It also covers use cases from banking, e-commerce, and accounting, among other industries. The paper then discusses ethical and legal implications such as GDPR, HIPAA, and compliance with data privacy laws, and how AI systems should be transparent and fair. Finally, future trends in table extraction are discussed, including the integration of generative AI and large language models (LLMs), robotic process automation (RPA), and real-time data extraction. The paper highlights the growing demand for advanced extraction technologies to increase the accuracy, efficiency, and scalability of financial document processing.
28

Shah, Chirag Vinalbhai. "Privacy-Preserving Digital Payments: AI and Big Data Integration for Secure Biometric Authentication." Global Research and Development Journals 4, no. 12 (2019): 1–9. http://dx.doi.org/10.70179/grdjev09i100014.

Abstract:
The goal of creating efficient digital payment services leads to the development of big datasets, which carry the everyday transactional histories of large numbers of young and old persons, processed through AI and other computer science methodologies. How, however, can the privacy-preserving processing of these national-scale, big-data attributes be ensured? In this article, we introduce a privacy-preserving large-scale digital payment method which leverages AI techniques and abundant digital payment receipts, obtained in a timely manner at individual and regional levels, for secure biometric authentication, employment, and other frequent services. The proposal's success relies on merging each individual's daily payment datasets with the relative long-term variance of biometric datasets in urban social security mobile applications.
29

Jamdar, Nikhil, and A. Vijayalakshmi. "Big Data Mining for Interesting Patterns with Map Reduce Technique." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (2017): 191. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19634.

Abstract:
There are many algorithms available in data mining to search for interesting patterns in transactional databases of precise data. Frequent pattern mining is a technique to find the frequently occurring items in data mining. Most techniques are used to find all the interesting patterns in a collection of precise data, where the items occurring in each transaction are known with certainty to the system. In many real-time applications, however, users are interested in only a tiny portion of the large set of frequent patterns. The proposed user-constrained mining approach will therefore help to find the frequent patterns in which the user is interested. This approach efficiently finds user-interested frequent patterns by applying user constraints to collections of uncertain data. The user can specify their own interest in the form of constraints, and the approach uses the MapReduce model to find uncertain frequent patterns that satisfy the user-specified constraints.
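A small sketch of pushing a user constraint into the counting pass itself, in a map/reduce flavor: each transaction emits only the subsets that satisfy the constraint, and identical subsets are then counted across transactions. The constraint and data are illustrative.

```python
from collections import Counter
from itertools import combinations

transactions = [{"milk", "bread", "egg"}, {"milk", "egg"}, {"bread", "egg"}]
must_contain = {"egg"}          # user constraint: only patterns with 'egg'
min_support = 2

# "Map": every transaction emits each subset satisfying the constraint;
# "Reduce": count identical subsets across all transactions.
counts = Counter(
    frozenset(sub)
    for t in transactions
    for size in range(1, len(t) + 1)
    for sub in combinations(sorted(t), size)
    if must_contain <= set(sub)
)
patterns = {s: n for s, n in counts.items() if n >= min_support}
print(patterns)  # {egg}: 3, {egg, milk}: 2, {bread, egg}: 2
```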
30

Shumovskaia, Valentina, Kirill Fedyanin, Ivan Sukharev, Dmitry Berestnev, and Maxim Panov. "Linking bank clients using graph neural networks powered by rich transactional data." International Journal of Data Science and Analytics 12, no. 2 (2021): 135–45. http://dx.doi.org/10.1007/s41060-021-00247-3.

Abstract:
Financial institutions obtain enormous amounts of data about client transactions and money transfers, which can be considered as a large graph dynamically changing in time. In this work, we focus on the task of predicting new interactions in the network of bank clients and treat it as a link prediction problem. We propose a new graph neural network model, which uses not only the topological structure of the network but also the rich time-series data available for the graph nodes and edges. We evaluate the developed method using data provided by a large European bank for several years. The proposed model outperforms the existing approaches, including other neural network models, with a significant gap in ROC AUC score on the link prediction problem, and also improves the quality of credit scoring.
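The core of such a model can be caricatured in NumPy as one graph-convolution propagation step followed by dot-product scoring of candidate client pairs; the paper's actual architecture, which also ingests edge time series, is considerably richer, and everything below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4                      # clients and feature dimension (toy sizes)
X = rng.normal(size=(n, d))      # per-client features (e.g. transaction stats)
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2)]:
    A[i, j] = A[j, i] = 1        # observed money-transfer links

# One GCN-style propagation step: symmetrically normalized adjacency with
# self-loops, followed by a linear map and ReLU.
A_hat = A + np.eye(n)
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(1)))
W = rng.normal(size=(d, d)) * 0.3            # stand-in for trained weights
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)

def link_score(i, j):
    # Dot-product decoder: higher score = more likely future interaction
    return float(H[i] @ H[j])

print(link_score(0, 3), link_score(0, 5))
```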
31

Javidi, Mohamad Masood, Najme Mansouri, and Asghar Asadi Karam. "Data Management Challenges in Cloud Environments." Computer Engineering and Applications Journal 3, no. 3 (2014): 158–71. http://dx.doi.org/10.18495/comengapp.v3i3.105.

Abstract:
Recently, the cloud computing paradigm has been receiving special excitement and attention in new research. Cloud computing has the potential to change a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is provisioned and purchased. Developers with novel ideas for new Internet services no longer require large capital outlays in hardware to launch their service, or the human expense to operate it. These cloud applications rely on large data centers and powerful servers that host Web applications and Web services. This paper presents an overview of what cloud computing means and its history, along with its advantages and disadvantages. We describe the problems and opportunities of deploying data management on these emerging cloud computing platforms. We find that large-scale data analysis jobs, decision support systems, and application-specific data marts are more likely to benefit from cloud computing platforms than operational, transactional database systems.
32

Lv, Xiao, Yong Jie Li, and Xu Lu. "A Web Data Mining Algorithm Based on Weighted Association Rules." Key Engineering Materials 467-469 (February 2011): 1386–91. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.1386.

Abstract:
Association rules mining is attracting much attention in the research community due to its broad applications. Existing web data mining methods suffer from two problems: 1) the large number of candidate itemsets, which are hard to prune, should be pruned in advance; and 2) the time spent repeatedly scanning the transactional database should be reduced. In this paper, a new association rules mining model is introduced to overcome these two problems. We develop an efficient algorithm, WARDM (Weighted Association Rules Data Mining), for mining the candidate itemsets. The algorithm covers the generation of candidate-1 itemsets, candidate-2 itemsets, and candidate-k itemsets (k>2), which avoids missing weighted frequent itemsets. The transactional database is scanned only once and candidate itemsets are pruned twice, which reduces the number of candidate itemsets. Theoretical analysis and experimental results show that the space and time complexity is relatively good; meanwhile, the algorithm decreases the number of candidate itemsets and enhances execution efficiency.
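For reference, weighted association rule mining replaces plain support with an item-weight-aware measure; below is a minimal sketch of one common weighted-support definition (average item weight times relative frequency), not the WARDM pruning scheme itself. Weights and data are illustrative.

```python
from itertools import combinations

weights = {"a": 0.9, "b": 0.5, "c": 0.2}     # item importance weights (toy)
transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"}]

def weighted_support(itemset):
    """Average item weight times the itemset's relative frequency."""
    freq = sum(itemset <= t for t in transactions) / len(transactions)
    avg_w = sum(weights[i] for i in itemset) / len(itemset)
    return avg_w * freq

candidates = [frozenset(c)
              for size in (1, 2)
              for c in combinations(sorted(weights), size)]
for c in sorted(candidates, key=weighted_support, reverse=True):
    print(set(c), round(weighted_support(c), 3))
```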
33

Rahman, Md Shafiqur, Proshanta Kumar Bhowmik, Balayet Hossain, et al. "Enhancing Fraud Detection Systems in the USA: A Machine Learning Approach to Identifying Anomalous Transactions." Journal of Economics, Finance and Accounting Studies 5, no. 5 (2023): 145–60. https://doi.org/10.32996/jefas.2023.5.5.15.

Abstract:
The landscape of financial fraud in the United States is more advanced today, with fraudsters adopting sophisticated methods that elude traditional detection systems. As digital payments gain popularity, the number of potential fraud cases and their sophistication also increased, causing heavy financial losses to institutions and consumers alike. The primary objective of this research was to design and implement machine learning models that can significantly improve fraud detection systems in their precision. This study was centered specifically on fraud detection in the US financial system, researching artificial intelligence approaches that can be applied to support anomaly detection and risk analysis processes. The dataset employed in the analysis is a high-level transaction dataset that includes a spectrum of financial transaction details. Each transaction entry included primary details such as timestamp, transaction value, and sender-receiver information. The timestamp enabled each transaction to be sorted in chronological order, making it possible to carry out time-series analysis of patterns such as maximum transaction time or seasonality in spending behavior. Three models were predominantly employed: Random Forest Classifier, Logistic Regression, and Support Vector Classifier. The performance of models was measured using a set of metrics that included accuracy, precision, recall, F1-score, and ROC-AUC. The Random Forest model was better in terms of higher accuracy, thanks to its ability to handle non-linear relationships via ensemble learning. The integration of machine learning in fraud detection enhances the capabilities of payment providers and financial institutions tremendously. With sophisticated algorithms, financial institutions can process large volumes of transactional data in real time, enabling them to detect anomalous patterns that speedily indicate fraud. The findings of this study reinforce the effectiveness of machine learning models in identifying anomalous transactions, verifying that advanced approaches such as Random Forest and Support Vector Machines significantly enhance fraud detection compared to legacy approaches. One key to such effectiveness is that feature selection is crucial; carefully chosen features that included user behavior and transactional context played a key role in increasing detection rates and eliminating false positives.
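A condensed scikit-learn rendition of the supervised pipeline described above; the data here are synthetic and the features are placeholders for the study's transaction attributes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
# Synthetic stand-ins for amount, hour-of-day, and a behavioral feature
X = np.column_stack([
    rng.lognormal(3, 1, n),          # transaction value
    rng.integers(0, 24, n),          # timestamp hour
    rng.normal(0, 1, n),             # sender/receiver history feature
])
y = (X[:, 0] > np.quantile(X[:, 0], 0.97)).astype(int)  # rare "fraud" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print(classification_report(y_te, clf.predict(X_te), digits=3))
print("ROC-AUC:", round(roc_auc_score(y_te, proba), 3))
```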
34

Theofilou, Asterios, Stefanos A. Nastis, Michail Tsagris, Santiago Rodriguez-Perez, and Konstadinos Mattas. "Design and Implementation of a Scalable Data Warehouse for Agricultural Big Data." Sustainability 17, no. 8 (2025): 3727. https://doi.org/10.3390/su17083727.

Abstract:
The rapid growth of agricultural data necessitates the development of storage systems that are scalable and efficient in storing, retrieving and analyzing very large datasets. The traditional relational database management systems (RDBMSs) struggle to keep up with large-scale analytical queries due to the volume and complexity inherent in those data. This study presents the design and implementation of a scalable data warehouse (DWH) system for agricultural big data. The proposed solution efficiently integrates data and optimizes data ingestion, transformation, and query performance, leveraging a distributed architecture based on HDFS, Apache Hive, and Apache Spark, deployed on dockerized Ubuntu Linux environments. This paper highlights the reasons why a DWH is irreplaceable for big data processing, without disputing the strengths of traditional databases in transactional use cases. By detailing the architectural choices and implementation strategy, this study provides a practical framework for deploying robust DWH solutions that are useful in supporting agricultural research, market predictions and policy decision-making.
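A minimal PySpark fragment of the ingestion-and-aggregation path such an architecture exposes; the paths and column names are hypothetical, and the Hive/HDFS wiring is omitted.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("agri-dwh-sketch")
         .getOrCreate())

# Ingest raw sensor/market readings (e.g. from HDFS) into a DataFrame
readings = spark.read.parquet("hdfs:///agri/raw/readings")

# A typical analytical query the warehouse is optimized for:
# average yield metric per region and month
summary = (readings
           .withColumn("month", F.date_trunc("month", F.col("observed_at")))
           .groupBy("region", "month")
           .agg(F.avg("yield_kg_ha").alias("avg_yield"),
                F.count("*").alias("n_obs")))

summary.write.mode("overwrite").parquet("hdfs:///agri/dwh/yield_summary")
```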
35

Meher, Dipali, Pallawi Bulakh, and Meena Jabde. "Learning Graph Databases: Neo4j an Overview." International Journal of Engineering Applied Sciences and Technology 8, no. 2 (2023): 216–19. http://dx.doi.org/10.33564/ijeast.2023.v08i02.033.

Abstract:
As the internet grows day by day, the amount of data being generated is huge. This data includes structured and unstructured data. Data together with its relationships to other data makes for the most powerful and meaningful information. Most data exists in the form of relationships between different or the same objects, and notably, the relationships between the data are often more important than the data itself. Relational databases handle such relationships by storing structured data in many records. The important point to note here is that relational database management systems use tables with normalization; when the amount of data in such tables is huge, handling that data along with its relationships becomes a tedious task. Here, graph databases come into the picture. Entities and their relationships in relational databases are reflected as nodes and relationships in graph databases. Graph databases provide a much simpler data model than databases built around online transaction processing systems. Graph databases provide features such as transactional integrity and operational availability. This paper introduces the idea of graph database systems in conjunction with Neo4j, encompassing its query features, consistency, transactions, availability, and scaling.
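A small example of the property-graph access pattern described above, using the official Neo4j Python driver and Cypher; the connection details and schema are illustrative.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))

with driver.session() as session:
    # Relationships are first-class: create two nodes and an edge at once
    session.run(
        "MERGE (a:Person {name: $a}) "
        "MERGE (b:Person {name: $b}) "
        "MERGE (a)-[:KNOWS]->(b)",
        a="Alice", b="Bob",
    )
    # Traversal query: contacts up to two hops away, with no join tables
    result = session.run(
        "MATCH (p:Person {name: $name})-[:KNOWS*1..2]->(f) "
        "RETURN DISTINCT f.name AS name", name="Alice")
    print([record["name"] for record in result])

driver.close()
```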
36

Mong, Sylvia Gala, Ruth Lua Ejau, Roseline Ikau, and Evie Sendi Ibil. "Unveiling Blockchain Technology in Construction Supply Chain Management: The What, When, Who, Where, and How Towards Digitalization." International Journal of Research and Innovation in Social Science VIII, no. VIII (2024): 3631–58. http://dx.doi.org/10.47772/ijriss.2024.808054s.

Abstract:
Digitalization and technology disruptions have ushered in the Fourth Industrial Revolution (IR 4.0), which is now required for many sectors. On the other hand, the building sector is seen to be among those that are resistant to change. The possibility of implementing blockchain technology in the building sector has not received much attention. The cryptocurrency space is not new to blockchain technology. In essence, it is a public ledger or distributed database that contains all the executed and shared digital events or transactions between all involved parties. Herein lies the utility of blockchain technology, particularly in the highly transactional construction sector. Blockchain technology records every transaction and arranges all the “blocks” into a “chain,” acting as a middleman in every transaction we make to foster trust. In the construction business, supply chain management spans a range of stakeholders, including contractors, manufacturers, consultants, and clients. It also covers numerous construction organization activities, starting with planning, design, construction, and maintenance. Numerous activities lead to a large volume of transactions that impede communication and cooperation between these parties and result in expensive arbitration or lawsuit cases. It has been acknowledged that blockchain technology is one of the cutting-edge innovations that have the potential to revolutionize several sectors. Using blockchain technology has several benefits, such as lower transaction costs, protection against data tampering and falsification, and more flexibility. Because it involves so many transactions between many parties, the construction sector is typically seen as having a lot of potential for using blockchain technology. There aren’t many examples of blockchain uses in the construction sector, despite its apparent benefits.
37

Fouad, Mohammed M., Mostafa G. M. Mostafa, Abdulfattah S. Mashat, and Tarek F. Gharib. "IMIDB: An Algorithm for Indexed Mining of Incremental Databases." Journal of Intelligent Systems 26, no. 1 (2017): 69–85. http://dx.doi.org/10.1515/jisys-2015-0107.

Abstract:
Association rules provide important knowledge that can be extracted from transactional databases. Owing to the massive exchange of information nowadays, databases become dynamic and change rapidly and periodically: new transactions are added to the database and/or old transactions are updated or removed from the database. Incremental mining was introduced to overcome the problem of maintaining previously generated association rules in dynamic databases. In this paper, we propose an efficient algorithm (IMIDB) for incremental itemset mining in large databases. The algorithm utilizes the trie data structure for indexing dynamic database transactions. Performance comparison of the proposed algorithm to recently cited algorithms shows that a significant improvement of about two orders of magnitude is achieved by our algorithm. Also, the proposed algorithm exhibits linear scalability with respect to database size.
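The trie indexing idea can be shown in a bare-bones form: transactions are inserted as sorted item paths so shared prefixes are stored once, with counts kept on the nodes. This is the generic transaction-trie idea, not IMIDB itself.

```python
class TrieNode:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = {}   # item -> TrieNode
        self.count = 0       # transactions passing through this node

def insert(root, transaction):
    """Insert one transaction as a sorted path; shared prefixes merge."""
    node = root
    for item in sorted(transaction):
        node = node.children.setdefault(item, TrieNode())
        node.count += 1

def prefix_support(root, itemset):
    """Support of an itemset that forms a sorted prefix in the trie."""
    node = root
    for item in sorted(itemset):
        if item not in node.children:
            return 0
        node = node.children[item]
    return node.count

root = TrieNode()
for t in [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}]:
    insert(root, t)
print(prefix_support(root, {"a", "b"}))  # 2: two transactions share prefix a,b
```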
38

Osundare, Olajide Soji, Chidiebere Somadina Ike, Ololade Gilbert Fakeyede, and Adebimpe Bolatito Ige. "Application of Machine Learning in Detecting Fraud in Telecommunication-Based Financial Transactions." Computer Science & IT Research Journal 4, no. 3 (2023): 458–77. http://dx.doi.org/10.51594/csitrj.v4i3.1499.

Abstract:
The increasing integration of telecommunications with financial services has brought about significant advancements in the accessibility and efficiency of financial transactions. However, this convergence has also led to a rise in fraudulent activities, posing substantial risks to both service providers and users. The application of machine learning (ML) in detecting fraud within telecommunication-based financial transactions offers a promising solution to these challenges. This abstract explores the potential of ML techniques to enhance the detection and prevention of fraud in this domain. Machine learning algorithms, particularly those specializing in anomaly detection, pattern recognition, and predictive modeling, are well-suited to identifying fraudulent activities in real-time. These algorithms can analyze vast amounts of transaction data to detect irregularities that may indicate fraud, such as unusual transaction patterns, deviations from normal behavior, and other red flags that traditional rule-based systems might overlook. By continuously learning from new data, ML models can adapt to emerging fraud tactics, making them highly effective in a rapidly evolving threat landscape. Furthermore, the integration of ML with big data analytics allows for the processing and analysis of large-scale transactional data, enhancing the accuracy and speed of fraud detection. Techniques such as supervised learning, unsupervised learning, and reinforcement learning are particularly effective in categorizing transaction types and identifying potential frauds with minimal human intervention. The use of ML also enables the automation of fraud detection processes, reducing operational costs and increasing the efficiency of fraud management systems. This abstract highlights the critical role of machine learning in enhancing the security of telecommunication-based financial transactions. The ability of ML to detect and prevent fraud in real-time not only mitigates risks but also improves trust and reliability in telecommunication financial services. As fraudsters continue to develop sophisticated methods, the ongoing refinement of ML algorithms will be essential in maintaining robust defenses against financial fraud in the telecommunication sector. Keywords: ML, Detecting, Fraud, Telecommunication-Based, Financial Transactions.
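The unsupervised anomaly-detection branch highlighted above can be sketched with an isolation forest; the data are synthetic and the contamination rate would be tuned in practice.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Synthetic transfer amounts: mostly routine, with a few extreme outliers
normal = rng.normal(loc=20, scale=5, size=(980, 1))
fraud = rng.normal(loc=300, scale=50, size=(20, 1))
X = np.vstack([normal, fraud])

# contamination = expected share of anomalies in the stream
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = model.predict(X)          # +1 = normal, -1 = flagged as anomalous

print("flagged:", int((flags == -1).sum()), "of", len(X))
```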
39

Miranda, Eka. "Desain Data Warehouse pada Sistem Informasi Sumber Daya Manusia Sub-Sistem Rekrutmen." ComTech: Computer, Mathematics and Engineering Applications 3, no. 1 (2012): 307. http://dx.doi.org/10.21512/comtech.v3i1.2416.

Abstract:
Employees, as a resource, are essential to improving the effectiveness of a company's performance and process efficiency. This paper discusses the implementation of a data warehouse and its role in assisting decision making related to recruitment activities undertaken by the Human Resources Department. This research builds a data warehouse design to store large amounts of data, to gain potentially new perspectives on data distribution, to provide reports and answers to users' ad hoc questions, and to analyze transactional data. This study aims to design a data warehouse to support accurate decision making related to human resource management in order to create high-performance productivity. The method used in this paper consists of: (1) data collection using interviews and a literature study related to employee recruitment; and (2) data warehouse design derived from Teh Ying Wah et al. This research results in a data warehouse design and its implementation to analyze transactional data from the related activities of recruitment and employee management to support decision making.
APA, Harvard, Vancouver, ISO, and other styles
40

Freitag, Michael, Alfons Kemper, and Thomas Neumann. "Memory-optimized multi-version concurrency control for disk-based database systems." Proceedings of the VLDB Endowment 15, no. 11 (2022): 2797–810. http://dx.doi.org/10.14778/3551793.3551832.

Full text
Abstract:
Pure in-memory database systems offer outstanding performance but degrade heavily if the working set does not fit into DRAM, which is problematic in view of declining main memory growth rates. In contrast, recently proposed memory-optimized disk-based systems such as Umbra leverage large in-memory buffers for query processing but rely on fast solid-state disks for persistent storage. They offer near in-memory performance while the working set is cached, and scale gracefully to arbitrarily large data sets far beyond main memory capacity. Past research has shown that this architecture is indeed feasible for read-heavy analytical workloads. We continue this line of work in the following paper, and present a novel multi-version concurrency control approach that enables a memory-optimized disk-based system to achieve excellent performance on transactional workloads as well. Our approach exploits that the vast majority of versioning information can be maintained entirely in-memory without ever being persisted to stable storage, which minimizes the overhead of concurrency control. Large write transactions for which this is not possible are extremely rare, and handled transparently by a lightweight fallback mechanism. Our experiments show that the proposed approach achieves transaction throughput up to an order of magnitude higher than competing disk-based systems, confirming its viability in a real-world setting.
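The core mechanism summarized above, per-key chains of timestamped versions consulted against a reader's snapshot, can be illustrated with a conceptual sketch. This shows the general MVCC idea only; it is not Umbra's actual implementation:

```python
# Conceptual sketch of multi-version concurrency control: each key holds a
# chain of timestamped versions kept in memory, and a reader sees the newest
# version no later than its snapshot timestamp.
from dataclasses import dataclass, field

@dataclass
class VersionedStore:
    # key -> list of (commit_timestamp, value), newest last
    chains: dict = field(default_factory=dict)
    clock: int = 0

    def write(self, key, value):
        """Commit a new version with the next timestamp."""
        self.clock += 1
        self.chains.setdefault(key, []).append((self.clock, value))
        return self.clock

    def read(self, key, snapshot_ts):
        """Return the newest version visible at snapshot_ts."""
        for ts, value in reversed(self.chains.get(key, [])):
            if ts <= snapshot_ts:
                return value
        return None  # key did not exist at that snapshot

store = VersionedStore()
store.write("balance", 100)
snap = store.clock          # reader takes a snapshot here
store.write("balance", 80)  # later writer commits a new version
print(store.read("balance", snap))        # -> 100 (snapshot isolation)
print(store.read("balance", store.clock)) # -> 80  (latest version)
```

The design insight the paper exploits is that such version chains can live entirely in memory: only committed base data must reach stable storage, so concurrency control adds almost no I/O.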
APA, Harvard, Vancouver, ISO, and other styles
41

Basani, Maria Anurag Reddy. "Optimizing Cloud Data Storage: Evaluating File Formats for Efficient Data Warehousing." International Journal for Research in Applied Science and Engineering Technology 12, no. 10 (2024): 922–31. http://dx.doi.org/10.22214/ijraset.2024.64753.

Full text
Abstract:
This paper presents a detailed analysis of three widely-used data storage formats—Parquet, Avro, and ORC— evaluating their performance across key metrics such as query execution, compression efficiency, data skipping, schema evolution, and throughput. Each format offers distinct advantages depending on the nature of the workload. Parquet is optimized for read-heavy analytical queries, providing excellent compression and efficient query performance through its columnar structure. Avro excels in write-heavy, real-time data streaming scenarios, where schema flexibility and backward compatibility are crucial. ORC balances the two, offering strong support for analytical and transactional workloads, especially in handling complex queries and nested data structures. This comparative study highlights the contexts in which each format performs best, providing valuable insights into the trade-offs associated with their use in cloud data warehouses and large-scale data processing environments.
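A minimal sketch of the columnar trade-off discussed above, using Apache Arrow's Parquet bindings: because the layout is column-oriented, a query touching one column reads far less data than a full row scan. The file name and data are illustrative:

```python
# Sketch: write a Parquet file and read back a single column (projection).
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "order_id": list(range(1_000)),
    "customer": [f"c{i % 50}" for i in range(1_000)],
    "amount":   [round(10 + (i % 97) * 1.5, 2) for i in range(1_000)],
})

# Columnar layout plus compression: typical of read-heavy analytics.
pq.write_table(table, "orders.parquet", compression="snappy")

# Read back only the column the query needs (projection pushdown).
amounts = pq.read_table("orders.parquet", columns=["amount"])
print(amounts.num_rows, amounts.column_names)
```

Avro, by contrast, is row-oriented, which favors append-heavy streaming writes with evolving schemas over column-pruned analytical scans, matching the workload split the paper describes.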
APA, Harvard, Vancouver, ISO, and other styles
42

San, San Nwe, Khin Lay Khin, and Myint Yee Myint. "Delivery Feet Data using K Mean Clustering with Applied SPSS." International Journal of Trend in Scientific Research and Development 3, no. 5 (2019): 1944–45. https://doi.org/10.5281/zenodo.3591719.

Full text
Abstract:
Data mining refers to extracting or mining knowledge from large amounts of data. Many people treat data mining as a synonym for another popular term, knowledge discovery from data (KDD). Data can be mined from relational databases, data warehouses, transactional databases, advanced data and information systems, and advanced applications. This paper constructs a clustering model that classifies car driving data using the k-means clustering algorithm. The dataset was downloaded from Google.com.
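A minimal sketch of the approach described above: k-means applied to delivery driving features. The two features (daily distance and fraction of time speeding), the synthetic data, and k=2 are assumptions for illustration, not the paper's actual dataset:

```python
# Illustrative k-means clustering of delivery driving profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
urban = np.column_stack([rng.normal(50, 10, 100), rng.normal(0.10, 0.03, 100)])
rural = np.column_stack([rng.normal(180, 20, 100), rng.normal(0.25, 0.05, 100)])
X = np.vstack([urban, rural])  # [mean daily distance km, fraction of time speeding]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)   # one centre per driving profile
print(kmeans.labels_[:5])        # cluster assignment per driver
```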
APA, Harvard, Vancouver, ISO, and other styles
43

Shamie, Mohamad Mohamad, and Muhammad Mazen Almustafa. "Improving Association Rule Mining Using Clustering-Based Data Mining Model for Traffic Accidents." Review of Computer Engineering Studies 8, no. 3 (2021): 65–70. http://dx.doi.org/10.18280/rces.080301.

Full text
Abstract:
Data mining is a process of knowledge discovery that extracts interesting, previously unknown, potentially useful, and nontrivial patterns from large data sets. There is currently growing interest in data mining for traffic accidents, making it an emerging research community. The large number of traffic accidents in recent years has generated large amounts of traffic accident data. Mining algorithms have played a great role in determining the causes of these accidents, especially association rule algorithms. One challenging problem in data mining is mining association rules effectively from huge transactional databases, and many efforts have been made to propose and improve association rule mining methods. In this paper, we use the RapidMiner application to design a process that generates association rules based on clustering algorithms.
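A hedged sketch of the cluster-then-mine idea, using scikit-learn and mlxtend in place of the paper's RapidMiner process. The accident attributes, support and confidence thresholds, and the tiny dataset are all illustrative:

```python
# Sketch: partition accident records with k-means, then mine association
# rules within each cluster.
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot encoded accident records (each row = one accident).
df = pd.DataFrame(
    [[1, 0, 1, 0], [1, 0, 1, 1], [0, 1, 0, 1],
     [1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 1, 0]],
    columns=["night", "rain", "speeding", "intersection"],
).astype(bool)

# Step 1: partition accidents into more homogeneous clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df)

# Step 2: mine rules separately within each cluster.
for c in sorted(set(clusters)):
    subset = df[clusters == c]
    frequent = apriori(subset, min_support=0.5, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
    print(f"cluster {c}:")
    print(rules[["antecedents", "consequents", "support", "confidence"]])
```

Mining within clusters rather than over the whole database is what lets locally strong rules surface that would fall below the global support threshold.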
APA, Harvard, Vancouver, ISO, and other styles
44

Dr.Suryanarayana, N. R. "AI-BASED GST FRAUD DETECTION SYSTEM FOR LARGE-SCALE E-COMMERCE PLATFORMS." International Journal of E-Government & E-Business Research 10, no. 1 (2025): 01–06. https://doi.org/10.5281/zenodo.15273511.

Full text
Abstract:
The rapid growth of e-commerce platforms has multiplied fraudulent activities related to the Goods and Services Tax (GST). Due to their large scale and evolving patterns, traditional fraud detection mechanisms are unable to scale and detect such fraud effectively. To identify hidden relationships between entities (sellers, transactions, and their tax records), this research presents an AI-based GST fraud detection system employing Graph Neural Networks (GNNs). The system is deployed at scale on Google Cloud AI to process huge amounts of transactional data, spot anomalies, and alert on potential fraud in real time. The proposed solution increases fraud detection accuracy while decreasing false positives by learning different patterns of tax evasion. We conduct experiments on a large-scale e-commerce ecosystem and find our approach effective in identifying fraudulent activities in such scenarios. Keywords: GST Fraud Detection, Graph Neural Networks (GNNs), E-Commerce Security, Anomaly Detection, Google Cloud AI, Large-Scale Tax Compliance
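A minimal hedged sketch of GNN-based fraud scoring on a transaction graph, using PyTorch Geometric. The toy graph, random features, labels, and two-layer GCN are placeholder assumptions; the paper's actual architecture is not specified here:

```python
# Sketch: node classification (fraud / not-fraud) on an entity graph.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Nodes = sellers/transactions; edges link related entities.
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                           [1, 0, 2, 1, 4, 3]], dtype=torch.long)
x = torch.randn(5, 8)                    # 5 nodes, 8 features each
y = torch.tensor([0, 0, 1, 0, 1])        # 1 = known fraudulent node
data = Data(x=x, edge_index=edge_index, y=y)

class FraudGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(8, 16)
        self.conv2 = GCNConv(16, 2)      # fraud / not-fraud logits

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = FraudGCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    out = model(data)
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
print(out.argmax(dim=1))                 # predicted label per node
```

The point of the graph formulation is that a node's fraud score depends on its neighbours: a seller is suspicious partly because of the transactions and tax records it connects to.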
APA, Harvard, Vancouver, ISO, and other styles
45

Kodi, Divya. "Designing Real-time Data Pipelines for Predictive Analytics in Large-scale Systems." FMDB Transactions on Sustainable Computing Systems 2, no. 4 (2024): 178–88. https://doi.org/10.69888/ftscs.2024.000294.

Full text
Abstract:
In the age of data-driven decision-making, real-time data pipelines have become an integral building block for predictive analytics in large-scale systems. The article outlines the design, deployment, and challenges of creating reliable real-time data pipelines for predictive analytics at scale. The data used in this research is sourced from an e-commerce site and covers transactional information, customer behaviour, product browsing, and purchasing interactions. The data set contains both structured and unstructured data and exemplifies the complexity of processing high-velocity, large-scale data for predictive analytics. We touch upon key domains such as ingestion, processing, storage, and analytics, and discuss architectures such as Lambda and Kappa that offer fault-tolerant scalability. We discuss employing machine learning models, consuming streams of real-time data, and applying predictive models to derive actionable insights for large systems. Beyond the technological and operational requirements, this paper also describes the best practices, tools, and frameworks necessary to implement real-time data pipelines for predictive analytics correctly. The study emphasizes pipeline optimization for low latency, high throughput, and fault tolerance to enable long-term and precise predictions.
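A hedged sketch of the ingestion-to-prediction step in such a pipeline: a Kafka consumer pulls clickstream events and a pre-trained model scores each one. The topic name, broker address, and the scoring function are assumptions for illustration, not details from the article:

```python
# Sketch: score streaming e-commerce events as they arrive from Kafka.
import json
from kafka import KafkaConsumer  # pip install kafka-python

def predict_purchase_probability(event: dict) -> float:
    """Placeholder standing in for a real pre-trained model's scoring call."""
    return min(1.0, 0.1 * event.get("pages_viewed", 0))

consumer = KafkaConsumer(
    "clickstream-events",                       # hypothetical topic
    bootstrap_servers="localhost:9092",         # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:                        # blocks on the live stream
    event = message.value
    score = predict_purchase_probability(event)
    if score > 0.8:
        print(f"high-intent visitor {event.get('user_id')}: {score:.2f}")
```

Keeping the consumer loop this thin is one way to hold per-event latency low; heavier feature lookups and model calls would typically be batched or parallelized behind it.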
APA, Harvard, Vancouver, ISO, and other styles
46

Mustika, Tiara, and Rodiyah Rodiyah. "Political Dowry in the Maelstrom of Political Practices in Indonesia: Legal and Political Aspects." Journal of Law and Legal Reform 4, no. 1 (2023): 49–72. http://dx.doi.org/10.15294/jllr.v4i1.64398.

Full text
Abstract:
Political dowry can be understood as an underhand transaction involving the provision of large amounts of funds from a candidate for a position contested in the General Election or Regional Head Election to a political party serving as the political vehicle. A final political decision can change because of transactions intended to alter the political attitudes or actions of the people being influenced. In transactional politics, power is at play: those who want a change in the political attitudes and actions of political actors (friends or foes) will use power. The research method used is a normative method drawing on secondary data sources, that is, data obtained through literature, books, and similar materials. In politics, political bargaining can take the form of threats of punishment (the stick) or of profitable offers (the carrot). The power base the first party uses to influence the second party varies; it can be money, political position, or control of negative information about the second party. The practice of unreasonable political dowry across political activities also has a negative impact on the progress of the country's development; for example, costs that are too high increase the likelihood of greater corruption.
APA, Harvard, Vancouver, ISO, and other styles
47

Priyanka, Gowda Ashwath Narayana Gowda. "SQL vs. NoSQL Databases: Choosing the Right Option for FinTech." European Journal of Advances in Engineering and Technology 7, no. 8 (2020): 100–104. https://doi.org/10.5281/zenodo.13950855.

Full text
Abstract:
The paper discusses the critical decision of choosing between SQL and NoSQL databases for FinTech applications. FinTech, founded on large-scale data processing, transactional integrity, and real-time analytics, warrants robust and highly scalable database solutions. SQL databases are well suited to applications such as payment processing, customer relationship management, and core banking systems because of their strong consistency, reliability, and mature ecosystem. NoSQL databases, on the other hand, offer flexibility in handling unstructured data, horizontal scalability, and high availability for big data analytics, real-time fraud detection, and personalized finance services. The paper contrasts SQL and NoSQL databases with respect to data structure, scalability, consistency, and availability, stating their strengths and limitations in FinTech. We provide insights into which database type is more applicable to specific FinTech applications through several practical use cases and performance evaluations. The analysis shows that SQL databases are most relevant where the application or system requires high transactional integrity and structured data management, whereas NoSQL databases suit scenarios requiring flexibility and scalability with diverse data types. FinTech companies therefore have to weigh their individual needs carefully to choose the right database technology, ensuring it aligns with operational requirements and strategies for future growth.
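The transactional-integrity point can be made concrete with a short sketch: a funds transfer must commit atomically or not at all, the classic strength of SQL systems. Python's built-in sqlite3 stands in here for a production RDBMS:

```python
# Sketch: an atomic funds transfer; both UPDATEs commit or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 500.0), ("bob", 100.0)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE id = 'bob'")
        # If anything above raised, neither UPDATE would persist.
except sqlite3.Error:
    print("transfer rolled back")

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# -> [('alice', 300.0), ('bob', 300.0)]
```

A typical NoSQL document store would instead guarantee atomicity per document, trading cross-record transactions for horizontal scalability, which is precisely the trade-off the paper maps onto FinTech workloads.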
APA, Harvard, Vancouver, ISO, and other styles
48

Anantharaman, Padmanathan, and H. V. Ramakrishan. "Data Mining Itemset of Big Data Using Pre-Processing Based on Mapreduce FrameWork with ETL Tools." APTIKOM Journal on Computer Science and Information Technologies 2, no. 2 (2017): 57–62. http://dx.doi.org/10.11591/aptikom.j.csit.103.

Full text
Abstract:
As data volumes continue to grow, they quickly consume the capacity of data warehouses and application databases, forcing IT organizations into costly upgrades of expensive databases and data warehouse hardware appliances. At the same time, an enormous amount of data is being generated through the Internet of Things (IoT) as technologies advance and people use them in day-to-day activities; this data is termed Big Data and comes with its own characteristics and challenges. Frequent itemset mining algorithms aim to disclose frequent itemsets from a transactional database, but as dataset size increases, traditional frequent itemset mining cannot handle it. The MapReduce programming model solves the problem of large datasets, but its large communication cost reduces execution efficiency. This paper proposes a new k-means pre-processing technique applied to the BigFIM algorithm. ClustBigFIM uses a hybrid approach: clustering with the k-means algorithm to generate clusters from huge datasets, and Apriori and Eclat to mine frequent itemsets from the generated clusters using the MapReduce programming model. Results show that the execution efficiency of the ClustBigFIM algorithm increases when k-means clustering is applied before the BigFIM algorithm as a pre-processing technique.
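A hedged sketch of the MapReduce pattern underlying such mining: map each chunk of transactions to local item counts, then reduce by summing and filter by minimum support. This is pure standard-library Python on toy data; the cluster distribution, k-means pre-processing, and Eclat phases of ClustBigFIM are omitted:

```python
# Sketch: MapReduce-style counting of frequent single items.
from collections import Counter
from functools import reduce

transactions = [
    {"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
    {"bread", "milk", "butter"}, {"bread", "milk"}, {"tea"},
]
MIN_SUPPORT = 3  # assumed absolute support threshold

# Map phase: each chunk of transactions yields local item counts.
chunks = [transactions[:3], transactions[3:]]
partial_counts = [Counter(item for t in chunk for item in t) for chunk in chunks]

# Reduce phase: merge partial counts, then apply the support threshold.
totals = reduce(lambda a, b: a + b, partial_counts)
frequent_items = {item: n for item, n in totals.items() if n >= MIN_SUPPORT}
print(frequent_items)  # -> {'bread': 4, 'milk': 4, 'butter': 3}
```

The communication cost the paper targets comes from shuffling such partial counts between map and reduce nodes; clustering the data first shrinks what each node must exchange.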
APA, Harvard, Vancouver, ISO, and other styles
49

Wijaya, Andri, Mutia Maharani, and Meilinda. "IMPLEMENTASI PENDEKATAN AGILE UNTUK PENGEMBANGAN OLAP DATA PENJUALAN." ZONAsi: Jurnal Sistem Informasi 6, no. 1 (2024): 222–31. http://dx.doi.org/10.31849/zn.v6i1.17337.

Full text
Abstract:
Sales data in large quantities is difficult to process and report using Microsoft Excel, and doing so takes a long time. The proposed solution is Online Analytical Processing (OLAP), a method that allows faster decision-making through multidimensional data manipulation. An agile approach is applied in developing Online Analytical Processing for sales data. With Online Analytical Processing, access to and display of transactional data become more efficient, improving analysis quality and supporting management decisions. Research results show that the prototype accelerates sales reporting, with a response time of 0.0039 seconds. Online Analytical Processing also facilitates decision-making in seconds. User Acceptance Testing results show high software quality and performance (100%), with an overall evaluation of 86.67%, categorized as "Very Good". Keywords: Sales Data, Online Analytical Processing, User Acceptance Testing, Agile Development
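A minimal sketch of the multidimensional manipulation OLAP provides: a month-by-region revenue roll-up with grand totals, the kind of cross-tabulation that is slow to assemble by hand in a spreadsheet. Dimensions and figures are illustrative, not taken from the paper:

```python
# Sketch: an OLAP-style slice-and-dice over sales transactions with pandas.
import pandas as pd

sales = pd.DataFrame({
    "month":   ["Jan", "Jan", "Feb", "Feb", "Feb", "Mar"],
    "region":  ["West", "East", "West", "East", "West", "East"],
    "product": ["A", "B", "A", "A", "B", "B"],
    "revenue": [120, 90, 150, 80, 60, 200],
})

# Roll up revenue by month x region; margins=True adds grand totals.
cube = sales.pivot_table(index="month", columns="region", values="revenue",
                         aggfunc="sum", fill_value=0, margins=True)
print(cube)
```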
APA, Harvard, Vancouver, ISO, and other styles
50

Zhao, Wei, and Xi Ming Sun. "The influence of transactional leadership style on employees' innovation ability." Nurture 18, no. 1 (2023): 139–60. http://dx.doi.org/10.55951/nurture.v18i1.550.

Full text
Abstract:
Purpose: This study delves into two mainland Chinese footwear manufacturing companies to understand the link between transactional leadership and employee creativity, focusing on the mediating role of psychological empowerment. Design/Methodology/Approach: Employing descriptive statistical analysis, reliability analysis, confirmatory factor analysis, correlation analysis, and regression analysis, the study collected 576 valid questionnaires. Data analysis was conducted using SPSS 25 and Analysis of Moment Structures statistical software. Findings: Results confirm that psychological empowerment positively influences employee creativity. Transactional leadership also has a significant positive impact. Notably, psychological empowerment partially mediates the relationship between transactional leadership and creativity. Conclusions: Both transactional leadership and psychological empowerment are key factors in enhancing employee creativity, particularly in the two Chinese private footwear companies studied. Research Limitations: The research focuses on large and medium-sized firms from two Chinese regions, excluding smaller entities. Several potential influencing factors for creativity still need to be addressed. Practical Implications: Business leaders are advised to possess professional, solid skills, ensure fair treatment of employees, and provide appropriate rewards. Such practices can bolster team cohesion, spur innovation, and support sustainable enterprise growth. Contribution to literature: This work underscores the influence of psychological empowerment and transactional leadership on creativity, shedding light on the former's mediating role. The findings enrich the literature and offer a foundation for future research in similar domains.
APA, Harvard, Vancouver, ISO, and other styles