Academic literature on the topic 'Scalable Data Architecture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Scalable Data Architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Scalable Data Architecture"

1

Bagam, Naveen. "Implementing Scalable Data Architecture for Financial Institutions." Stallion Journal for Multidisciplinary Associated Research Studies 2, no. 3 (2023): 27–40. https://doi.org/10.55544/sjmars.2.3.5.

Full text
Abstract:
The finance sector generates vast volumes of complex data, which require scalable and robust architectures for efficient storage, processing, and analytics. Scalable data architecture is the foundation that keeps financial institutions competitive, compliant, and innovative in today's fast-developing digital landscape. This paper addresses the principles, technologies, and methodologies necessary to implement scalable data architecture, with particular attention to the challenges of high availability, security, and performance optimization. It draws on real-world examples, technical frameworks, and performance metrics to provide actionable insights on scalability, both for legacy systems and for new implementations.
APA, Harvard, Vancouver, ISO, and other styles
2

Venkata Surendra Reddy Appalapuram. "Hybrid data processing architectures: Balancing latency, complexity, and resource utilization in modern data ecosystems." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1832–41. https://doi.org/10.30574/wjaets.2025.15.2.0750.

Full text
Abstract:
In order to meet the changing needs of contemporary data ecosystems, this article provides a thorough analysis of hybrid data processing architectures that blend batch and streaming paradigms. The content systematically analyzes three prominent architectural patterns: Separate Pipelines with Unified Storage, Lambda Architecture, and Kappa Architecture. Through detailed technical implementation considerations and real-world case studies spanning e-commerce, financial services, and IoT domains, the discussion evaluates how these architectures balance the competing demands of latency, complexity, and resource utilization. Empirical analysis demonstrates that while each architecture offers distinct advantages in specific contexts, successful implementations share common characteristics: unified tooling across batch and streaming workloads, centralized scalable storage, consistent metadata management, reusable transformation logic, and robust processing guarantees. The article concludes with architectural selection guidelines based on use case characteristics and identifies emerging trends in hybrid data processing that will shape future industry practices.
APA, Harvard, Vancouver, ISO, and other styles
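The Lambda and Kappa patterns the abstract contrasts can be illustrated with a minimal sketch: in a Kappa-style design there is a single streaming code path, and "reprocessing" is simply replaying the durable log from offset 0 with new logic. The log and job below are illustrative stand-ins, not code from the cited article.

```python
# Minimal Kappa-style sketch: one streaming code path over a replayable log.
# Reprocessing with changed logic is just another replay from offset 0.
# All names here are illustrative, not taken from the cited article.

class EventLog:
    """Append-only, replayable log (a stand-in for e.g. a Kafka topic)."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def replay(self, from_offset=0):
        yield from self._events[from_offset:]


def run_stream_job(log, transform, from_offset=0):
    """Fold every event through `transform` into a keyed view."""
    view = {}
    for event in log.replay(from_offset):
        key, value = transform(event)
        view[key] = view.get(key, 0) + value
    return view


log = EventLog()
for user, amount in [("alice", 10), ("bob", 5), ("alice", 7)]:
    log.append({"user": user, "amount": amount})

# Original job: total spend per user.
totals = run_stream_job(log, lambda e: (e["user"], e["amount"]))

# "Reprocessing": changed logic (order counts) rebuilt by replaying the log.
counts = run_stream_job(log, lambda e: (e["user"], 1))
```

A Lambda design would instead maintain separate batch and speed code paths and merge their views at query time, which is exactly the complexity trade-off the article analyzes.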
3

Bharat Kumar Reddy Kallem. "Building a scalable enterprise data architecture for financial institutions." World Journal of Advanced Engineering Technology and Sciences 15, no. 1 (2025): 1153–57. https://doi.org/10.30574/wjaets.2025.15.1.0249.

Full text
Abstract:
Enterprise data architecture for financial institutions has evolved dramatically to address the exponential growth of financial data, which now exceeds 2.5 exabytes daily with a 40% annual growth rate. Traditional infrastructures struggle to meet modern operational demands, with a significant majority of institutions reporting scaling challenges. The shift toward real-time processing requirements compounds these difficulties as banking systems process billions of transactions daily while investment platforms handle hundreds of thousands of market data messages per second during volatility events. Modern architectural approaches include multi-tiered storage systems, domain-oriented data meshes, cloud-native deployments, and comprehensive governance frameworks that deliver substantial improvements across performance, integration, scalability, and security dimensions. Organizations implementing these advanced architectures experience dramatic reductions in processing latency, significant improvements in cross-domain analytics, enhanced deployment frequency, and strengthened security postures. These architectural transformations yield measurable business outcomes, including improved customer satisfaction, enhanced risk detection capabilities, reduced infrastructure costs, and accelerated time-to-market for financial products and services.
APA, Harvard, Vancouver, ISO, and other styles
4

Ram Rajendiran, Gautham. "Scalable Data Platform Architecture for Highly Variable e-Commerce Workloads." International Journal of Science and Research (IJSR) 9, no. 5 (2020): 1895–902. http://dx.doi.org/10.21275/sr24923122459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Urkudkar, Chetan. "Building Scalable ETL Pipelines for HR Data." American Journal of Engineering and Technology 7, no. 6 (2025): 88–95. https://doi.org/10.37547/tajet/volume07issue06-09.

Full text
Abstract:
The article is devoted to the development and experimental validation of scalable ETL pipelines for HR data, aimed at bridging the gap between the volume of heterogeneous workforce events and the capabilities of traditional nightly processes. The relevance of the study is determined by the exponential growth of the HR technology market to USD 40.45 billion in 2024 and its forecast doubling by 2032 at a 9.2% CAGR, as well as by the fragmentation of corporate systems, which leads to data incompleteness, inconsistency, and latency in turnover metrics and talent-development program effectiveness analysis. The work is aimed at formalizing requirements for Extraction, Transformation, Loading, Scalability, and Observability; at designing a containerized architecture based on Kubernetes, Apache Airflow, Spark, and Flink-CDC; and at ensuring low latency, exactly-once semantics, and linear scaling up to 32 worker pods with an efficiency η of 0.78 or greater. The novelty of the work lies in the first formal model that integrates adaptive API-request throttling with idempotent SCD-attribute transformations for a hybrid Iceberg/Snowflake storage layer, and in a complete observability system using Prometheus and OpenTelemetry with real-time alerts. An experimental evaluation on a private Kubernetes cluster under loads of up to 10⁸ records per day demonstrated end-to-end latency ≤ 15 min in batch mode, a p95 latency reduction to 48 s in near-real-time mode, throughput of up to 18.7k records/min with linear worker scaling (η = 0.82), and full lineage-graph traceability in compliance with GDPR. The main conclusions confirm that the proposed architecture provides reliable and reproducible HR-data integration with minimal latency and predictable cost, paving the way for practical deployment in large enterprises. This article will be helpful to data engineers, cloud-architecture designers, and project managers in HR analytics automation.
APA, Harvard, Vancouver, ISO, and other styles
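The idempotent SCD-attribute transformations the abstract requires can be sketched in miniature: re-applying the same change must never create duplicate history rows. The in-memory structure and field names below are hypothetical, chosen only to show the invariant, not taken from the article.

```python
# Hedged sketch of an idempotent SCD Type 2 (slowly changing dimension)
# upsert: a repeated, identical change is a no-op; a real change closes the
# current row and opens a new versioned one. Field names are illustrative.

def scd2_upsert(history, key, attrs, ts):
    """Apply one attribute change for `key` at timestamp `ts`, idempotently."""
    rows = history.setdefault(key, [])
    current = next((r for r in rows if r["valid_to"] is None), None)
    if current is not None:
        if current["attrs"] == attrs:
            return history            # identical change -> idempotent no-op
        current["valid_to"] = ts      # close the old version
    rows.append({"attrs": attrs, "valid_from": ts, "valid_to": None})
    return history
```

Because replays of the same event leave the history untouched, an at-least-once delivery pipeline still yields exactly-once effects on the dimension table, which is the property the authors rely on.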
6

Venkata, Gummadi. "Designing a Scalable Architecture for Customer Data Engineering Platform on Cloud Infrastructure: A Comprehensive Framework." Journal of Scientific and Engineering Research 10, no. 12 (2023): 243–51. https://doi.org/10.5281/zenodo.14012383.

Full text
Abstract:
The exponential growth of customer data in modern enterprises has created unprecedented challenges in data engineering, necessitating architectures capable of handling petabyte-scale processing while maintaining real-time analytics capabilities. This paper presents a comprehensive architectural framework for designing and implementing scalable customer data engineering platforms utilizing cloud infrastructure. The proposed architecture addresses critical challenges including real-time data processing, horizontal scalability, data governance, and security considerations. Through rigorous experimental validation and performance analysis conducted over a six-month period across three geographic regions, we demonstrate that the proposed architecture achieves a 40% reduction in data processing latency, 99.99% system availability, and 65% improvement in resource utilization compared to traditional architectures. The framework introduces novel approaches to data partitioning, processing optimization, and automated scaling, while maintaining cost-effectiveness. Our results indicate significant improvements in key performance indicators, including a 75% reduction in data retrieval time and 99.999% data durability. This research contributes to the field by providing a validated framework for building enterprise-scale data platforms that can adapt to evolving business requirements while maintaining operational efficiency.
APA, Harvard, Vancouver, ISO, and other styles
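One common technique behind the data partitioning and automated scaling this abstract mentions is consistent hashing, where adding a node remaps only a small fraction of keys. The ring below is a generic sketch of that technique under assumed names, not the paper's specific method.

```python
# Generic consistent-hashing sketch: keys map to the first virtual node
# clockwise on a hash ring, so adding a node moves only ~1/N of the keys.
# Class and node names are illustrative.
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    def __init__(self, nodes=(), vnodes=64):
        self.vnodes = vnodes
        self.ring = []                      # sorted (hash, node) pairs
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        # Each physical node owns `vnodes` positions for smoother balance.
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def locate(self, key):
        # First ring position clockwise from the key's hash (wrapping).
        hashes = [h for h, _ in self.ring]
        idx = bisect_right(hashes, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]
```

When a node is added, every key either keeps its previous owner or moves to the new node, which is exactly why horizontal scale-out stays cheap.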
7

Singh, Mantu. "Implementing Service Mesh Architecture for Scalable Applications." American Journal of Engineering and Technology 7, no. 4 (2025): 157–65. https://doi.org/10.37547/tajet/volume07issue04-21.

Full text
Abstract:
This study examines a decentralized approach to implementing a service mesh for microservice-based systems designed for scalable data processing. Unlike traditional solutions dominated by the pipes-and-filters pattern and a centralized control plane, this approach utilizes the concept of Eblocks—unified modules that incorporate service discovery, authentication, monitoring, and load management components. This allows for the formation of various patterns (manager-worker, divide-and-conquer, hybrid models) directly at the microservice level without strict dependence on centralized logic. It is demonstrated that such an architecture accelerates data processing through automatic scaling and parallel execution, simplifies configuration, and provides flexible security and observability mechanisms. The proposed results, supported by findings from other researchers, indicate a significant increase in system throughput when handling documents requiring pipeline, parallel, and distributed processing. The presented information is of interest to researchers and professionals in distributed systems, cloud computing, and microservice architecture, aiming for a deeper understanding and implementation of innovative service mesh architectures to enhance the scalability, reliability, and efficiency of modern IT applications.
APA, Harvard, Vancouver, ISO, and other styles
8

Sumit Kumar Agrawal and Dr T. Aswini. "Multi-Tenant Low Latency Scalable Architectures for Large-Scale Customer Data Processing." International Journal for Research Publication and Seminar 16, no. 1 (2025): 174–94. https://doi.org/10.36676/jrps.v16.i1.41.

Full text
Abstract:
In the era of big data, organizations are increasingly managing large volumes of customer data that need to be processed efficiently and scalably. Multi-tenant architectures provide an effective solution for such demands, especially when managing and processing data from multiple clients on a shared infrastructure. This paper explores the design and implementation of low-latency, scalable, multi-tenant architectures for large-scale customer data processing. By leveraging serverless computing, containerization, and distributed computing models, the architecture can dynamically scale according to the load while maintaining low latency, even during peak usage. Additionally, the paper highlights various key challenges such as data isolation, resource contention, and service degradation in multi-tenant environments, proposing strategies to mitigate these issues. The proposed system integrates cutting-edge technologies such as microservices, auto-scaling, and event-driven models to deliver both cost-effectiveness and high performance. This work aims to provide a comprehensive framework for developing and deploying robust, efficient, and scalable architectures capable of processing large-scale customer data in multi-tenant environments.
APA, Harvard, Vancouver, ISO, and other styles
9

Al-Fares, Mohammad, Alexander Loukissas, and Amin Vahdat. "A scalable, commodity data center network architecture." ACM SIGCOMM Computer Communication Review 38, no. 4 (2008): 63–74. http://dx.doi.org/10.1145/1402946.1402967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cheruku, Saketh Reddy, Shalu Jain, and Anshika Aggarwal. "Building Scalable Data Warehouses: Best Practices and Case Studies." Darpan International Research Analysis 12, no. 1 (2024): 80–99. http://dx.doi.org/10.36676/dira.v12.i1.87.

Full text
Abstract:
In today's data-driven world, the ability to manage, store, and analyze large volumes of data is crucial for business success. The demand for scalable data warehouses has risen dramatically as organizations seek to handle the explosion of data generated by modern applications and digital transactions. "Building Scalable Data Warehouses: Best Practices and Case Studies" explores the key strategies, methodologies, and technologies involved in designing and implementing scalable data warehouses that meet the demands of today and the future. The paper highlights the importance of architecture choices, data modeling techniques, and performance optimization in creating data warehouses that can grow with an organization's needs. Additionally, it provides case studies that demonstrate the real-world application of these principles in various industries, showing how scalable data warehouses have enabled companies to maintain high performance, reduce costs, and enhance decision-making capabilities. The paper begins by defining what constitutes a scalable data warehouse, emphasizing the importance of a flexible and adaptive architecture that can accommodate growing data volumes and changing business requirements. It explores different architectural approaches, including the benefits and challenges of traditional on-premises data warehouses versus cloud-based solutions.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Scalable Data Architecture"

1

Mehta, Dhananjay. "Building a scalable distributed data platform using lambda architecture." Kansas State University, 2017. http://hdl.handle.net/2097/35403.

Full text
Abstract:
Master of Science, Department of Computer Science, William H. Hsu.
Data is generated all the time by the Internet, system sensors, and the mobile devices around us; this is often referred to as 'big data'. Tapping this data is a challenge to organizations because of its nature, i.e., velocity, volume, and variety. What makes handling this data a challenge? Traditional data platforms have been built around relational database management systems coupled with enterprise data warehouses, and such legacy infrastructure is either technically incapable of scaling to big data or financially infeasible. The question then arises: how can a system be built to handle the challenges of big data and cater to the needs of an organization? The answer is the Lambda Architecture. Lambda Architecture (LA) is a generic term for a scalable and fault-tolerant data processing architecture that ensures real-time processing with low latency. LA provides a general strategy to knit together all the tools necessary for building a data pipeline for real-time processing of big data. LA comprises three layers: the Batch Layer, responsible for bulk data processing; the Speed Layer, responsible for real-time processing of data streams; and the Serving Layer, responsible for serving queries from end users. This project draws an analogy between modern data platforms and traditional supply chain management to lay down principles for building a big data platform and shows how major challenges in building data platforms can be mitigated. The project constructs an end-to-end data pipeline for the ingestion, organization, and processing of data and demonstrates how any organization can build a low-cost distributed data platform using the Lambda Architecture.
APA, Harvard, Vancouver, ISO, and other styles
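The three layers named in the abstract can be sketched with a toy word-count example: the batch layer recomputes exact views from the full master dataset, the speed layer covers only events that arrived since the last batch run, and the serving layer merges the two at query time. Class and view names below are illustrative, not code from the thesis.

```python
# Minimal Lambda Architecture sketch (illustrative word-count views).
from collections import Counter

class BatchLayer:
    """Recomputes views from the full master dataset (high latency, exact)."""
    def __init__(self):
        self.master = []                  # immutable, append-only dataset
        self.batch_view = Counter()
    def append(self, events):
        self.master.extend(events)
    def recompute(self):
        self.batch_view = Counter(self.master)   # full recomputation

class SpeedLayer:
    """Incrementally updates views for events not yet in a batch run."""
    def __init__(self):
        self.realtime_view = Counter()
    def update(self, event):
        self.realtime_view[event] += 1
    def reset(self):
        self.realtime_view.clear()        # discarded after the next batch run

class ServingLayer:
    """Answers queries by merging batch and real-time views."""
    def __init__(self, batch, speed):
        self.batch, self.speed = batch, speed
    def query(self, key):
        return self.batch.batch_view[key] + self.speed.realtime_view[key]
```

Once a batch run absorbs the recent events, the speed layer's view is reset, so queries stay consistent while the heavy recomputation runs in the background.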
2

Ward, Michael Patrick. "A transputer based scalable data acquisition system." Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Ke, and University of Lethbridge Faculty of Arts and Science. "A reconfigurable and scalable efficient architecture for AES." Thesis, Lethbridge, Alta.: University of Lethbridge, Department of Mathematics and Computer Science, 2008. http://hdl.handle.net/10133/778.

Full text
Abstract:
A new 32-bit reconfigurable FPGA implementation of the AES algorithm is presented in this thesis. It employs a single-round architecture to minimize hardware cost. The combinational logic implementation of the S-Box ensures suitability for FPGA devices without Block RAMs (BRAMs). Fully composite-field GF((2⁴)²)-based encryption and key schedule lead to lower hardware complexity and convenient, efficient sub-pipelining. For the first time, a sub-pipelined on-the-fly key schedule over the composite field GF((2⁴)²) is applied to all standard key sizes (128-, 192-, and 256-bit). The proposed architecture achieves a throughput of 805.82 Mbit/s using 523 slices, a throughput/slice ratio of 1.54 Mbps/slice, on a Xilinx Virtex2 XC2V2000 ff896 device.
APA, Harvard, Vancouver, ISO, and other styles
4

de, la Rúa Martínez Javier. "Scalable Architecture for Automating Machine Learning Model Monitoring." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280345.

Full text
Abstract:
In recent years, owing to the advent of more sophisticated tools for exploratory data analysis, data management, Machine Learning (ML) model training, and model serving in production, the concept of MLOps has gained popularity. As an effort to bring DevOps processes to the ML lifecycle, MLOps aims at more automation in the execution of diverse and repetitive tasks along the cycle and at smoother interoperability between the teams and tools involved. In this context, the main cloud providers have built their own ML platforms [4, 34, 61], offered as services in their cloud solutions. Moreover, multiple frameworks have emerged to solve concrete problems such as data testing, data labelling, distributed training, or prediction interpretability, and new monitoring approaches have been proposed [32, 33, 65]. Among all the stages in the ML lifecycle, one of the most commonly overlooked, although relevant, is model monitoring. Recently, cloud providers have presented their own tools for use within their platforms [4, 61], while work is ongoing to integrate existing frameworks [72] into open-source model serving solutions [38]. Most of these frameworks are either built as an extension of an existing platform (i.e., they lack portability), follow a scheduled batch processing approach at a minimum rate of hours, or present limitations for certain outlier and drift algorithms due to the design of the platform architecture into which they are integrated. In this work, a scalable automated cloud-native architecture is designed and evaluated for ML model monitoring in a streaming approach. An experiment conducted on a 7-node cluster with 250,000 requests at different concurrency rates shows maximum latencies of 5.9, 29.92, and 30.86 seconds after request time for 75% of distance-based outlier detection, windowed statistics, and distribution-based data drift detection, respectively, using windows of 15 seconds length and 6 seconds of watermark delay.
APA, Harvard, Vancouver, ISO, and other styles
5

Fero, Allison. "A scalable architecture for the interconnection of microgrids." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/115007.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2017.
Electrification is a global challenge that is especially acute in India, where about one fifth of the population has no access to electricity. Solar-powered microgrid technology is a viable alternative to the central grid in the electrification of India, especially in remote areas where grid extension is cost-prohibitive. However, the upfront costs of microgrid development, coupled with inadequate financing, have led to the implementation of small-scale, stand-alone systems. Thus, the costs of local generation and storage are a substantial barrier to acquisition of the technology. Furthermore, the issues of uncertainty, intermittency, and variability of renewable generation are daunting in small microgrids due to lack of aggregation. In this work, a methodology is provided that maximizes system-wide reliability through the design of a computationally scalable communication and control architecture for the interconnection of microgrids. An optimization-based control system is proposed that finds optimal load scheduling and energy sharing decisions subject to system dynamics, power balance constraints, and congestion constraints, while maximizing network-wide reliability. The model is first formulated as a centralized optimization problem, and the value of interconnection is assessed using supply and demand data gathered in India. The model is then formulated as a layered decomposition, in which local scheduling optimization occurs at each microgrid, requiring only nearest-neighbor communication to ensure feasibility of the solutions. Finally, a methodology is proposed to generate distributed optimal policies for a network of Linear Quadratic Regulators (LQRs) that each make decisions coupled by network flow constraints. The LQR solution is combined with network-flow dual decomposition to generate a fully decomposed algorithm for finding the dynamic programming solution of the LQR subject to network flow constraints.
APA, Harvard, Vancouver, ISO, and other styles
6

Mühll, Johann Rudolf Vonder. "Concept and implementation of a scalable architecture for data-parallel computing /." [S.l.] : [s.n.], 1996. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=11787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gupta, Pankaj. "Resource-Constraint and Scalable Data Distribution Management for High Level Architecture." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4345.

Full text
Abstract:
In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in the High Level Architecture. The High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to interoperate multiple simulations and facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications where control of data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than these three algorithms, as it avoids the quadratic computation step they involve. The simulation results show that the P-Pruning DDM algorithm uses run-time memory more efficiently and requires fewer multicast groups than the three alternatives. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement and present a performance evaluation study of it in a memory-constrained environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes run-time memory more efficiently. It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders in terms of the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis, and we have extended the algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions, where the distribution of federates changes at run-time: the dynamic P-Pruning algorithm tracks the changes among federate regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of the communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines, describe the implementation details involved in integrating the P-Pruning algorithm with FDK, and report our experiences.
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Tianqi. "An architecture to support scalable distributed virtual environment systems on grid." Click to view the E-thesis via HKUTO, 2004. http://sunzi.lib.hku.hk/hkuto/record/B31473374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Olofsson, Louise. "Sustainable and scalable testing strategy in a multilayered service-based architecture." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-80134.

Full text
Abstract:
This thesis examines whether it is possible to measure the quality of a software project and introduces a metric for evaluating the quality of every test performed during software development. This subject is examined because it can be hard to determine how well a project and all its parts performed, both during implementation and after it is done. To meet this need, the thesis provides a possible solution. To answer these questions, a prototype has been developed. The prototype is automated and runs through all the software development tests selected for this project. It sums up the test results and then translates them, with the help of a metric, into a quality grade. The metric is calculated using an arbitrary formula developed for this thesis. Once the metric is computed, the development team working on the project has an overview of how well each test area is performing and how good the project's end result was. With the help of this metric, it is also easier to see whether the quality achieved meets the company's standards and the customer's wishes. The prototype aims to be sustainable, both because the solution should last for the long term and because sustainability means a smoother and more efficient way for developers and others involved to work with the prototype, since little extra work is required when updates or other necessary implementations are needed. The prototype is applied to a second project, larger and more advanced than the one created for this thesis, to get a better and more accurate understanding of whether the implementation is correct and whether the metric can be used as a value describing a project. The metric results are compared and evaluated. The results of this thesis constitute a proof of concept and can be seen as a first step in a longer evaluation process for determining the quality of tests.
The conclusion is that more parameters and more weighting of each test's importance are needed to achieve a reliable metric result. This tool is meant to help developers quickly reach a conclusion about how good the work is. It could also benefit a company focused on web development and IT solutions, as it makes it easier to follow and set a standard for the services they provide.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Tianqi, and 王天琦. "An architecture to support scalable distributed virtual environment systems on grid." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31473374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Scalable Data Architecture"

1

Azarmi, Bahaaldine. Scalable Big Data Architecture. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1326-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Giordano, Anthony. Data integration blueprint and modeling: Techniques for a scalable and sustainable architecture. IBM Press Pearson, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mühll, Johann Rudolf Vonder. Concept and implementation of a scalable architecture for data-parallel computing. Hartung-Gorre, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

IEEE Computer Society Microprocessor and Microcomputer Standards Committee, and Institute of Electrical and Electronics Engineers, eds. IEEE standard for shared-data formats optimized for scalable coherent interface (SCI) processors. Institute of Electrical and Electronics Engineers, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Scalable Big Data Architecture: A practitioner's guide to choosing relevant Big Data architecture. Apress, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Banerjee, Sinchan. Scalable Data Architecture with Java: Build Efficient Enterprise-Grade Data Architecting Solutions Using Java. Packt Publishing, Limited, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oertli, Erwin Ewald. Switcherland - a scalable computer architecture with support for continuous data types. 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mühll, Johann Rudolf Vonder. Concept and implementation of a scalable architecture for data-parallel computing. 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Institute of Electrical and Electronics Engineers. IEEE Standard for Shared-Data Formats Optimized for Scalable Coherent Interfaces (SCI). Institute of Electrical and Electronics Engineers, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Giordano, Anthony David. Data Integration Blueprint and Modeling: Techniques for a Scalable and Sustainable Architecture. Pearson Higher Education & Professional Group, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Scalable Data Architecture"

1

Azarmi, Bahaaldine. "Streaming Data." In Scalable Big Data Architecture. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1326-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Azarmi, Bahaaldine. "The Big (Data) Problem." In Scalable Big Data Architecture. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1326-1_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Azarmi, Bahaaldine. "Learning From Your Data?" In Scalable Big Data Architecture. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1326-1_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Azarmi, Bahaaldine. "Early Big Data with NoSQL." In Scalable Big Data Architecture. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1326-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Azarmi, Bahaaldine. "Defining the Processing Topology." In Scalable Big Data Architecture. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1326-1_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Azarmi, Bahaaldine. "Querying and Analyzing Patterns." In Scalable Big Data Architecture. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1326-1_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Azarmi, Bahaaldine. "Governance Considerations." In Scalable Big Data Architecture. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1326-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Goodell, Geoff, D. R. Toliver, and Hazem Danny Nakib. "A Scalable Architecture for Electronic Payments." In Financial Cryptography and Data Security. FC 2022 International Workshops. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-32415-4_38.

Full text
Abstract:
We present a scalable architecture for electronic retail payments via central bank digital currency and offer a solution to the perceived conflict between robust regulatory oversight and consumer affordances such as privacy and control. Our architecture combines existing work in payment systems and digital currency with a new approach to digital asset design for managing unforgeable, stateful, and oblivious assets without relying on either a central authority or a monolithic consensus system. Regulated financial institutions have a role in every transaction, and the consumer affordances are achieved through the use of non-custodial wallets that unlink the sender from the recipient in the transaction channel. This approach is fully compatible with the existing two-tiered banking system and can complement and extend the roles of existing money services businesses and asset custodians.
APA, Harvard, Vancouver, ISO, and other styles
9

Che Ani, Zhamri, Fauziah Baharom, Haslina Mohd, Yuhanis Yusof, and Mohamed Ali Saip. "Scalable Big Data Architecture: Improving Data Management at MoHE." In Information Systems Engineering and Management. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-91485-0_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Truică, Ciprian-Octavian, Jérôme Darmont, and Julien Velcin. "A Scalable Document-Based Architecture for Text Analysis." In Advanced Data Mining and Applications. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49586-6_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Scalable Data Architecture"

1

Hviid, Jakob, Anders Launer Bæk-Petersen, Emil Stubbe Kolvig-Raun, and Juan Marín-Vega. "AI Pipelines: A Scalable Architecture for Dynamic Data Processing." In 2025 IEEE 22nd International Conference on Software Architecture Companion (ICSA-C). IEEE, 2025. https://doi.org/10.1109/icsa-c65153.2025.00020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Beňo, Lukáš, Erik Kučera, Oto Haffner, Martin Mečír, Martin Pajpach, and Dominik Janecky. "Transforming IoT with Scalable Serverless Architecture for Enhanced Data Collection." In 2024 International Conference on Computational Intelligence and Network Systems (CINS). IEEE, 2024. https://doi.org/10.1109/cins63881.2024.10864436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gautam, Shivam, Deeksha Goplani, Darshan Patel, et al. "Maximizing Multi-Core Efficiency in BLAS: A Scalable Architecture for Performance." In 2024 IEEE 31st International Conference on High Performance Computing, Data and Analytics Workshop (HiPCW). IEEE, 2024. https://doi.org/10.1109/hipcw63042.2024.00086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

G, Saranya, Kumaran K, Pratheep V, and Dhanush S. R. "Developing Scalable and Fault Tolerant Distributed Architecture for Big Unstructured Data Systems." In 2025 6th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI). IEEE, 2025. https://doi.org/10.1109/icmcsi64620.2025.10883441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lin, Fongci, Patrick Young, Huan He, et al. "Kamino: A Scalable Architecture to Support Medical AI Research Using Large Real World Data." In 2024 IEEE 12th International Conference on Healthcare Informatics (ICHI). IEEE, 2024. http://dx.doi.org/10.1109/ichi61247.2024.00072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Owaki, Koichi, Matsuki Yamamoto, Nattaon Techasarntikul, et al. "Proposal of a Scalable Building Operating System Architecture and Data Model Toward Software-Defined Building." In 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE, 2024. http://dx.doi.org/10.1109/compsac61105.2024.00249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yu, Yue, Ran Liang, and Jiqiu Xu. "A Scalable and Extensible Blockchain Architecture." In 2018 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2018. http://dx.doi.org/10.1109/icdmw.2018.00030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Falsafi, Babak. "Towards energy-scalable data centers." In 2010 15th CSI International Symposium on Computer Architecture and Digital Systems (CADS). IEEE, 2010. http://dx.doi.org/10.1109/cads.2010.5623534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Al-Fares, Mohammad, Alexander Loukissas, and Amin Vahdat. "A scalable, commodity data center network architecture." In Proceedings of the ACM SIGCOMM 2008 Conference. ACM Press, 2008. http://dx.doi.org/10.1145/1402958.1402967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Olsen, Bryan, John R. Johnson, and Terence Critchlow. "Data intensive architecture for scalable cyber analytics." In 2011 IEEE International Conference on Technologies for Homeland Security (HST). IEEE, 2011. http://dx.doi.org/10.1109/ths.2011.6107901.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Scalable Data Architecture"

1

Decker, Jonathan, Alex Godwin, Mark Livingston, and Denise Royle. A Scalable Architecture for Visual Data Exploration. Defense Technical Information Center, 2009. http://dx.doi.org/10.21236/ada623684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhuang, Shelley Q. Bayeux: An Architecture for Scalable and Fault-tolerant Wide-area Data Dissemination. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada603200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pasupuleti, Murali Krishna. Scalable Quantum Networks: Entanglement-Driven Secure Communication. National Education Services, 2025. https://doi.org/10.62311/nesx/rrvi525.

Full text
Abstract:
Scalable quantum networks, powered by entanglement-driven secure communication, are poised to revolutionize global information exchange, cybersecurity, and quantum computing infrastructures. Unlike classical communication systems, quantum networks leverage quantum entanglement and superposition to enable ultra-secure data transmission, quantum key distribution (QKD), and instantaneous information sharing across large-scale networks. This research explores the fundamental principles of entanglement-based communication, the role of quantum repeaters, quantum memory, and multi-nodal entanglement distribution in overcoming photon loss, decoherence, and distance limitations in quantum networks. Additionally, it examines the hybrid integration of quantum-classical networking architectures, real-world experimental implementations such as satellite-based quantum communication and metropolitan-scale quantum cryptography, and the scalability challenges related to hardware, error correction, and network synchronization. The study also addresses post-quantum cryptography, quantum-resistant algorithms, and cybersecurity vulnerabilities in quantum communication, offering a comprehensive roadmap for the development of secure, scalable, and globally interconnected quantum networks.
Keywords: Scalable quantum networks, quantum entanglement, entanglement distribution, quantum key distribution (QKD), secure communication, quantum repeaters, quantum memory, photon loss mitigation, quantum cryptography, post-quantum security, hybrid quantum-classical networks, metropolitan-scale quantum networks, satellite-based quantum communication, quantum internet, quantum coherence, quantum error correction, quantum teleportation, multi-nodal quantum entanglement, cybersecurity in quantum networks, quantum-resistant algorithms.
APA, Harvard, Vancouver, ISO, and other styles
4

Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Full text
Abstract:
Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization capabilities of deep learning architectures. Additionally, it explores biologically plausible learning mechanisms such as Hebbian learning, spike-timing-dependent plasticity (STDP), and neuromodulation, drawing insights from neuroscience and cognitive computing. The role of spiking neural networks (SNNs) and neuromorphic computing in low-power AI and real-time decision-making is also analyzed, with applications in robotics, brain-computer interfaces, edge AI, and cognitive computing. Case studies highlight the industrial adoption of biologically inspired AI, focusing on adaptive neural controllers, neuromorphic vision, and memory-based architectures. This research underscores the importance of integrating theoretical learning principles with biologically motivated AI models to develop more interpretable, generalizable, and scalable intelligent systems.
Keywords: Neural computation, learning theory, expressivity, deep learning, recurrent neural networks, spiking neural networks, biologically inspired AI, infinite-width networks, kernel methods, attractor networks, synaptic plasticity, STDP, neuromodulation, cognitive computing, dynamical systems, function approximation, generalization, AI stability, neuromorphic computing, robotics, brain-computer interfaces, edge AI, biologically plausible learning.
APA, Harvard, Vancouver, ISO, and other styles
5

Pasupuleti, Murali Krishna. 2D Quantum Materials for Next-Gen Semiconductor Innovation. National Education Services, 2025. https://doi.org/10.62311/nesx/rrvi425.

Full text
Abstract:
The emergence of two-dimensional (2D) quantum materials is revolutionizing next-generation semiconductor technology, offering superior electronic, optical, and quantum properties compared to traditional silicon-based materials. 2D materials, such as graphene, transition metal dichalcogenides (TMDs), hexagonal boron nitride (hBN), and black phosphorus, exhibit high carrier mobility, tunable bandgaps, exceptional mechanical flexibility, and strong light-matter interactions, making them ideal candidates for ultra-fast transistors, spintronics, optoelectronic devices, and quantum computing applications. This research explores the fundamental properties, synthesis techniques, and integration challenges of 2D quantum materials in semiconductor innovation, focusing on scalability, stability, and industrial feasibility. Additionally, it examines emerging applications in energy-efficient electronics, flexible and wearable technologies, and photonic integration for high-speed data processing. The study also addresses current challenges, including material defect engineering, large-area fabrication, and quantum coherence preservation, providing a comprehensive roadmap for the adoption of 2D quantum materials in next-generation semiconductor architectures.
Keywords: 2D quantum materials, next-generation semiconductors, graphene electronics, transition metal dichalcogenides (TMDs), quantum transport, high carrier mobility, spintronics, optoelectronics, quantum computing, flexible electronics, nanoelectronics, photonic integration, tunable bandgap materials, electronic properties of 2D materials, synthesis of 2D semiconductors, scalable semiconductor fabrication, quantum coherence, energy-efficient transistors, van der Waals heterostructures, semiconductor innovation.
APA, Harvard, Vancouver, ISO, and other styles
6

Pasupuleti, Murali Krishna. Quantum-Enhanced Machine Learning: Harnessing Quantum Computing for Next-Generation AI Systems. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv125.

Full text
Abstract:
Quantum-enhanced machine learning (QML) represents a paradigm shift in artificial intelligence by integrating quantum computing principles to solve complex computational problems more efficiently than classical methods. By leveraging quantum superposition, entanglement, and parallelism, QML has the potential to accelerate deep learning training, optimize combinatorial problems, and enhance feature selection in high-dimensional spaces. This research explores foundational quantum computing concepts relevant to AI, including quantum circuits, variational quantum algorithms, and quantum kernel methods, while analyzing their impact on neural networks, generative models, and reinforcement learning. Hybrid quantum-classical AI architectures, which combine quantum subroutines with classical deep learning models, are examined for their ability to provide computational advantages in optimization and large-scale data processing. Despite the promise of quantum AI, challenges such as qubit noise, error correction, and hardware scalability remain barriers to full-scale implementation. This study provides an in-depth evaluation of quantum-enhanced AI, highlighting existing applications, ongoing research, and future directions in quantum deep learning, autonomous systems, and scientific computing. The findings contribute to the development of scalable quantum machine learning frameworks, offering novel solutions for next-generation AI systems across finance, healthcare, cybersecurity, and robotics.
Keywords: Quantum machine learning, quantum computing, artificial intelligence, quantum neural networks, quantum kernel methods, hybrid quantum-classical AI, variational quantum algorithms, quantum generative models, reinforcement learning, quantum optimization, quantum advantage, deep learning, quantum circuits, quantum-enhanced AI, quantum deep learning, error correction, quantum-inspired algorithms, quantum annealing, probabilistic computing.
APA, Harvard, Vancouver, ISO, and other styles