Academic literature on the topic 'Compute as a Commodity (CaaC)'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Compute as a Commodity (CaaC).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Compute as a Commodity (CaaC)"

1

George, A. Shaji. "Democratizing Compute Power: The Rise of Computation as a Commodity and its Impacts." Partners Universal Innovative Research Publication (PUIRP) 2, no. 3 (2024): 57–74. https://doi.org/10.5281/zenodo.11654354.

Abstract:
This paper investigates the emerging concept of Compute as a Commodity (CaaC), which promises to revolutionize business innovation by providing easy access to vast compute resources, unlocked by cloud computing. CaaC aims to treat compute like electricity or water - conveniently available for consumption on demand. The pay-as-you-go cloud model enables click-button provisioning of processing capacity, without major capital investments. Our research defines CaaC, its objectives of ubiquitous, low-cost compute, and its self-service consumption vision. We analyze the CaaC technical model, which comprises a code/data repository, automated resource discovery, and a dynamic deployment engine. Innovations like spot pricing, provider federation, and deployment automation are highlighted. Numerous CaaC benefits are studied, including heightened business agility from scalable compute, lowered costs from utilizing surplus capacity, and boosted creativity from removing innovation barriers. Despite its advantages, CaaC poses infrastructural intricacies around seamless management across environments. Our work then elucidates CaaC's transformative capacity across verticals like healthcare, banking, media, and retail. For instance, healthcare workloads around genomic sequencing, drug discovery datasets, clinical trial analytics, personalized medicine, and more can leverage CaaC's elastic resources. Financial sectors can tap scalable computing to enable real-time fraud analysis, trade insights, and security evaluations. Media production houses can parallelize rendering and animation via CaaC instead of investing in high-performance computing farms. Further CaaC innovations are expected to be elaborated, like edge computing for reduced latency analytics, quantum computing for tackling complex optimizations, and serverless architectures for simplified access. In conclusion, CaaC represents an important shift in democratizing compute power, unlocking a new wave of innovation by making high-performance computing affordable and accessible. As CaaC matures, widespread adoption can transform businesses, industries, and society by accelerating digital transformation and fueling new data-driven competition. This paper serves as a primer on CaaC capabilities and provides both technological and strategic recommendations for its adoption. Further research can evaluate the societal impacts of democratized computing and guide policy decisions around data regulation, algorithmic accountability, and technology leadership in the age of CaaC.
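The pay-as-you-go and spot-pricing ideas highlighted in this abstract can be made concrete with a minimal Python sketch comparing on-demand and spot (surplus-capacity) costs for a burst workload; the rates, job size, and interruption overhead below are hypothetical illustration values, not figures from the paper.

    # Minimal sketch: pay-as-you-go vs. spot-priced compute costing.
    # All prices and workload figures are hypothetical examples.

    def on_demand_cost(core_hours: float, rate_per_core_hour: float) -> float:
        """Cost when capacity is provisioned on demand at a fixed hourly rate."""
        return core_hours * rate_per_core_hour

    def spot_cost(core_hours: float, spot_rate: float, interruption_overhead: float = 0.10) -> float:
        """Cost on surplus (spot) capacity, padded for work lost to interruptions."""
        return core_hours * spot_rate * (1.0 + interruption_overhead)

    if __name__ == "__main__":
        job_core_hours = 5_000       # e.g. a genomics or rendering burst
        on_demand_rate = 0.048       # $ per core-hour (illustrative)
        spot_rate = 0.015            # $ per core-hour (illustrative)

        od = on_demand_cost(job_core_hours, on_demand_rate)
        sp = spot_cost(job_core_hours, spot_rate)
        print(f"On-demand: ${od:,.2f}  Spot: ${sp:,.2f}  Savings: {1 - sp / od:.0%}")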
2

Rai, Pankaj Kumar, Digvijay Pandey, and Binay Kumar Pandey. "The Future of Enterprise and Innovation is Compute as a Commodity, or CaaC." Partners Universal International Research Journal (PUIRJ) 3, no. 2 (2024): 89–94. https://doi.org/10.5281/zenodo.11973804.

Abstract:
The advent of cloud computing has made vast amounts of processing power accessible with only a few clicks of the mouse. Businesses should begin to treat this capability as "compute as a commodity" rather than "compute as a service", a shift that will significantly transform how companies approach computing for research and for commercial operations.
3

Zasada, S. J., D. W. Wright, and P. V. Coveney. "Large-scale binding affinity calculations on commodity compute clouds." Interface Focus 10, no. 6 (2020): 20190133. http://dx.doi.org/10.1098/rsfs.2019.0133.

Abstract:
In recent years, it has become possible to calculate binding affinities of compounds bound to proteins via rapid, accurate, precise and reproducible free energy calculations. This is imperative in drug discovery as well as personalized medicine. This approach is based on molecular dynamics (MD) simulations and draws on sequence and structural information of the protein and compound concerned. Free energies are determined by ensemble averages of many MD replicas, each of which requires hundreds of cores and/or GPU accelerators, which are now available on commodity cloud computing platforms; there are also requirements for initial model building and subsequent data analysis stages. To automate the process, we have developed a workflow known as the binding affinity calculator. In this paper, we focus on the software infrastructure and interfaces that we have developed to automate the overall workflow and execute it on commodity cloud platforms, in order to reliably predict their binding affinities on time scales relevant to the domains of application, and illustrate its application to two free energy methods.
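The ensemble-averaging step described here can be illustrated with a short, self-contained sketch: per-replica free-energy estimates are combined into a mean with a bootstrap error bar. The replica values are synthetic placeholders, not results from the binding affinity calculator.

    # Sketch: combine per-replica free-energy estimates into an ensemble average
    # with a bootstrap uncertainty, as in ensemble-based binding affinity workflows.
    # The replica values are synthetic; a real workflow would collect them from
    # the analysis stage of each MD replica.
    import random
    import statistics

    replica_dG = [-7.9, -8.3, -8.1, -7.6, -8.4, -8.0, -7.8, -8.2]  # kcal/mol, synthetic

    def bootstrap_error(values, n_resamples=10_000, seed=42):
        rng = random.Random(seed)
        means = [statistics.fmean(rng.choices(values, k=len(values)))
                 for _ in range(n_resamples)]
        return statistics.stdev(means)

    mean_dG = statistics.fmean(replica_dG)
    err = bootstrap_error(replica_dG)
    print(f"Ensemble binding free energy: {mean_dG:.2f} +/- {err:.2f} kcal/mol")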
4

Bouhali, Othmane, and Ali Sheharyar. "Distributed rendering of computer-generated images on commodity compute clusters." Qatar Foundation Annual Research Forum Proceedings, no. 2012 (October 2012): CSP16. http://dx.doi.org/10.5339/qfarf.2012.csp16.

5

Park, Chonhyon, Jia Pan, and Dinesh Manocha. "Real-Time Optimization-Based Planning in Dynamic Environments Using GPUs." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (2021): 168–70. http://dx.doi.org/10.1609/socs.v3i1.18263.

Abstract:
We present a novel algorithm to compute collision-free trajectories in dynamic environments. Our approach is general and makes no assumption about the obstacles or their motion. We use a replanning framework that interleaves optimization-based planning with execution. Furthermore, we describe a parallel formulation that exploits the large number of cores on commodity graphics processors (GPUs) to compute a high-quality path in a given time interval. Overall, we show that search in configuration spaces can be significantly accelerated by using GPU parallelism.
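The replanning framework sketched in this abstract interleaves planning and execution under a fixed time budget. The minimal loop below shows only that control structure; plan_step and execute are hypothetical stubs, not the authors' GPU-parallel optimizer.

    # Sketch of a replanning loop that interleaves planning with execution.
    # plan_step() stands in for a (possibly GPU-parallel) trajectory optimizer
    # and execute() for the controller; both are hypothetical placeholders.
    import time

    def plan_step(current_state, goal, time_budget_s):
        """Refine a trajectory for at most time_budget_s seconds (stub)."""
        deadline = time.monotonic() + time_budget_s
        trajectory = [current_state, goal]      # trivial initial guess
        while time.monotonic() < deadline:
            pass                                # iterative optimization would run here
        return trajectory

    def execute(trajectory):
        """Follow the next waypoint of the trajectory (stub)."""
        return trajectory[1]

    state, goal = (0.0, 0.0), (1.0, 1.0)
    for cycle in range(3):                      # each cycle: plan within budget, then act
        trajectory = plan_step(state, goal, time_budget_s=0.05)
        state = execute(trajectory)
        print(f"cycle {cycle}: state -> {state}")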
6

Gu, Yunhong, and Robert L. Grossman. "Sector and Sphere: the design and implementation of a high-performance data cloud." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367, no. 1897 (2009): 2429–45. http://dx.doi.org/10.1098/rsta.2009.0053.

Abstract:
Cloud computing has demonstrated that processing very large datasets over commodity clusters can be done simply, given the right programming model and infrastructure. In this paper, we describe the design and implementation of the Sector storage cloud and the Sphere compute cloud. By contrast with the existing storage and compute clouds, Sector can manage data not only within a data centre, but also across geographically distributed data centres. Similarly, the Sphere compute cloud supports user-defined functions (UDFs) over data both within and across data centres. As a special case, MapReduce-style programming can be implemented in Sphere by using a Map UDF followed by a Reduce UDF. We describe some experimental studies comparing Sector/Sphere and Hadoop using the Terasort benchmark. In these studies, Sector is approximately twice as fast as Hadoop. Sector/Sphere is open source.
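The remark that MapReduce is a special case (a Map UDF followed by a Reduce UDF) can be illustrated with a tiny word-count sketch in plain Python; it imitates the pattern only and does not use the actual Sector/Sphere APIs.

    # Sketch: MapReduce expressed as a Map UDF followed by a Reduce UDF,
    # in plain Python (not the Sector/Sphere API). Word count is the example.
    from collections import defaultdict
    from itertools import chain

    def map_udf(record: str):
        """Map UDF: emit (word, 1) pairs for one input record."""
        return [(word.lower(), 1) for word in record.split()]

    def reduce_udf(key, values):
        """Reduce UDF: sum the counts emitted for one key."""
        return key, sum(values)

    records = ["compute as a commodity", "commodity compute clouds", "compute on demand"]

    grouped = defaultdict(list)                       # shuffle/group by key
    for key, value in chain.from_iterable(map_udf(r) for r in records):
        grouped[key].append(value)

    counts = dict(reduce_udf(k, v) for k, v in grouped.items())
    print(counts)    # {'compute': 3, 'as': 1, 'a': 1, 'commodity': 2, ...}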
7

Abbes, Heithem, Franck Butelle, and Christophe Cérin. "Parallelization of Littlewood-Richardson Coefficients Computation and its Integration into the BonjourGrid Meta-Desktop Grid Middleware." International Journal of Grid and High Performance Computing 3, no. 4 (2011): 71–86. http://dx.doi.org/10.4018/jghpc.2011100106.

Abstract:
This paper shows how to parallelize a compute intensive application in mathematics (Group Theory) for an institutional Desktop Grid platform coordinated by a meta-grid middleware named BonjourGrid. The paper is twofold: it shows how to parallelize a sequential program for a multicore CPU which participates in the computation; and it demonstrates the effort for launching multiple instances of the solutions for the mathematical problem with the BonjourGrid middleware. BonjourGrid is a fully decentralized Desktop Grid middleware. The main results of the paper are: a) an efficient multi-threaded version of a sequential program to compute Littlewood-Richardson coefficients, namely the Multi-LR program, and b) a proof of concept, centered on user needs, for the BonjourGrid middleware dedicated to coordinating multiple instances of programs for Desktop Grids with the help of Multi-LR. In this paper, the scientific work consists in starting from a model for the solution of a compute intensive problem in mathematics, incorporating the concrete model into a middleware, and running it on a commodity PC platform managed by an innovative meta Desktop Grid middleware.
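The parallelization strategy described for Multi-LR can be mimicked, in spirit only, by dispatching independent coefficient evaluations to a pool of workers (processes here, rather than the paper's threads); lr_coefficient below is a hypothetical placeholder, not the actual Littlewood-Richardson routine from the paper.

    # Sketch: farm independent coefficient computations out to worker processes.
    # lr_coefficient() is a hypothetical stand-in for the real Littlewood-Richardson
    # routine; it returns a dummy value.
    from concurrent.futures import ProcessPoolExecutor

    def lr_coefficient(task):
        """Placeholder for computing one coefficient c^nu_{lambda,mu}."""
        lam, mu, nu = task
        return (lam, mu, nu), (len(lam) * len(mu) + len(nu)) % 7   # dummy result

    if __name__ == "__main__":
        tasks = [((2, 1), (1, 1), (3, 2)),
                 ((3,), (2, 1), (4, 2)),
                 ((2, 2), (2,), (3, 2, 1))]
        with ProcessPoolExecutor(max_workers=4) as pool:
            for (lam, mu, nu), value in pool.map(lr_coefficient, tasks):
                print(f"c[{nu}; {lam}, {mu}] = {value}")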
8

Lamberton, Barbara A. "Baier Building Products, Inc.: Performance Incentives and Variance Analysis in Sales Distribution." Issues in Accounting Education 23, no. 2 (2008): 281–90. http://dx.doi.org/10.2308/iace.2008.23.2.281.

Abstract:
Faced with price volatility and changes in key personnel responsibilities, a small privately held distributor of commodity building products with limited resources needs to re-evaluate its performance incentives. The company's owners and controller need your help in assessing the extent to which the current compensation scheme has encouraged opportunistic behavior, resulting in large commissions without significant movement toward the company's strategic objectives. By completing this case successfully, you will learn how to develop a balanced scorecard suitable for a small sales distribution business and learn how to compute and interpret marketing variances.
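For readers unfamiliar with the variance arithmetic the case asks for, the sketch below computes textbook sales price and sales volume variances; the figures are invented for illustration and are not taken from the case.

    # Textbook sales variance arithmetic (illustrative figures only, not case data).
    actual_units, budget_units = 11_000, 10_000
    actual_price, budget_price = 4.80, 5.00          # $ per unit

    # Sales price variance: effect of selling at a different price than budgeted.
    sales_price_variance = (actual_price - budget_price) * actual_units     # -2,200 (unfavorable)

    # Sales volume variance (revenue basis): effect of selling a different quantity.
    sales_volume_variance = (actual_units - budget_units) * budget_price    # +5,000 (favorable)

    print(f"Sales price variance:  {sales_price_variance:+,.0f}")
    print(f"Sales volume variance: {sales_volume_variance:+,.0f}")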
9

Crooks, Natacha. "Efficient Data Sharing across Trust Domains." ACM SIGMOD Record 52, no. 2 (2023): 36–37. http://dx.doi.org/10.1145/3615952.3615962.

Abstract:
Cross-Trust-Domain Processing. Data is now a commodity. We know how to compute and store it efficiently and reliably at scale. We have, however, paid less attention to the notion of trust. Yet, data owners today are no longer the entities storing or processing their data (medical records are stored on the cloud, data is shared across banks, etc.). In fact, distributed systems today consist of many different parties, whether it is cloud providers, jurisdictions, organisations or humans. Modern data processing and storage always straddles trust domains.
10

Asafu-Adjaye, John, and Renuka Mahadevan. "The Welfare Effects of the Australian Goods and Services Tax." Singapore Economic Review 47, no. 1 (2002): 49–63. http://dx.doi.org/10.1142/s0217590802000407.

Abstract:
This paper analyses the effect of the Australian goods and services tax. First, we compute the social marginal cost per dollar revenue raised for nine broad commodity groups to determine whether a uniform flat rate is efficient. Second, we evaluate the welfare effects of the tax on the consumption of different income groups. The results indicate that a uniform tax may not be efficient and that the goods and services tax has adversely affected the distribution of purchasing power and thus, there is little scope for using the indirect tax system as a means to redistribute consumption towards the poor.

Book chapters on the topic "Compute as a Commodity (CaaC)"

1

Sarabu, Venkata Rahul, Chaithanya Ravulu, Venkata Gummadi, Mridula Dileepraj Kidiyur, Shravan Kumar Joginipalli, and Digvijay Pandey. "Computing's Future." In Embracing the Cloud as a Business Essential. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-9581-3.ch026.

Abstract:
The commoditization of computing resources, known as Compute as a Commodity (CaaC), is driving a significant shift in cloud computing. This chapter explores CaaC's evolution, from traditional cloud models to a highly flexible, scalable, and cost-efficient approach to computational power. By examining CaaC's key principles, benefits, challenges, and strategic implications, this chapter provides a detailed analysis of how organizations can leverage on-demand compute resources for operational efficiency and innovation. The chapter also covers the future trends in CaaC, including edge computing, artificial intelligence, and quantum computing, highlighting how these advancements are shaping the business landscape. Through real-world case studies and technical insights, this work equips professionals and researchers with a solid understanding of how to adopt and optimize CaaC for sustained growth.
2

Abbes, Heithem, Franck Butelle, and Christophe Cérin. "Parallelization of Littlewood-Richardson Coefficients Computation and its Integration into the BonjourGrid Meta-Desktop Grid Middleware." In Applications and Developments in Grid, Cloud, and High Performance Computing. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2065-0.ch013.

Abstract:
This paper shows how to parallelize a compute intensive application in mathematics (Group Theory) for an institutional Desktop Grid platform coordinated by a meta-grid middleware named BonjourGrid. The paper is twofold: it shows how to parallelize a sequential program for a multicore CPU which participates in the computation; and it demonstrates the effort for launching multiple instances of the solutions for the mathematical problem with the BonjourGrid middleware. BonjourGrid is a fully decentralized Desktop Grid middleware. The main results of the paper are: a) an efficient multi-threaded version of a sequential program to compute Littlewood-Richardson coefficients, namely the Multi-LR program, and b) a proof of concept, centered on user needs, for the BonjourGrid middleware dedicated to coordinating multiple instances of programs for Desktop Grids with the help of Multi-LR. In this paper, the scientific work consists in starting from a model for the solution of a compute intensive problem in mathematics, incorporating the concrete model into a middleware, and running it on a commodity PC platform managed by an innovative meta Desktop Grid middleware.
3

Sahu, Himanshu, and Ninni Singh. "Software-Defined Storage." In Advances in Systems Analysis, Software Engineering, and High Performance Computing. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-3640-6.ch013.

Abstract:
SDS, along with SDN and software-defined compute (SDC, wherein computing is virtualized and software defined), creates software-defined infrastructure (SDI). SDI is the set of these three components, SDN, SDS, and SDC, forming a new kind of software-defined IT infrastructure in which centralization and virtualization are the main focus. SDI is proposed to have infrastructure built on commodity hardware with a software stack defined over it. SDS exploits the same concepts of decoupling and centralization for storage solutions as SDN does for networking. SDN decouples the control plane from the data plane of a layer-3 switch or router and creates a centralized decision point called the controller. SDS works in a similar way by moving decision making from the storage hardware to a centralized server. This helps in developing new and existing storage solutions over commodity storage devices. The centralization helps to create better, more dynamic solutions for satisfying customized user needs, and the solutions are expected to be cheaper due to the use of commodity hardware.
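The decoupling the chapter describes, with decisions made by a central controller while data stays on commodity storage nodes, can be reduced to a tiny placement-policy sketch; the node list and policy are hypothetical examples, not an actual SDS product API.

    # Sketch of an SDS-style controller: placement decisions are made centrally,
    # while the data path remains on commodity storage nodes.
    # Nodes and the placement policy are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class StorageNode:
        name: str
        free_gb: int
        tier: str            # e.g. "ssd" or "hdd"

    class Controller:
        def __init__(self, nodes):
            self.nodes = nodes

        def place(self, size_gb: int, want_tier: str) -> StorageNode:
            """Centralized decision: matching-tier node with the most free space."""
            candidates = [n for n in self.nodes
                          if n.tier == want_tier and n.free_gb >= size_gb]
            if not candidates:                                   # fall back to any tier
                candidates = [n for n in self.nodes if n.free_gb >= size_gb]
            chosen = max(candidates, key=lambda n: n.free_gb)
            chosen.free_gb -= size_gb
            return chosen

    ctrl = Controller([StorageNode("node-a", 500, "hdd"),
                       StorageNode("node-b", 200, "ssd"),
                       StorageNode("node-c", 900, "hdd")])
    print(ctrl.place(50, "ssd").name)    # node-b
    print(ctrl.place(300, "hdd").name)   # node-c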
4

Reddy, D. Mallikarjuna, and Thandu Vamshi Krishna. "SELF-SIMILAR BEHAVIOUR OF CPI HEADLINE INFLATION." In Futuristic Trends in Contemporary Mathematics & Applications Volume 3 Book 2. Iterative International Publishers, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/v3bkcm2p7ch4.

Abstract:
Core inflation has a significant role in monetary policy decisions. Core inflation is determined by removing the temporary price changes and retaining the permanent (core) price changes of the headline inflation commodity basket. Forecasting inflation is of vital interest to policymakers. Different methodologies have been constructed for measuring core inflation, such as the exclusion-based method, the symmetric trim method, the asymmetric trim method, the weighted median method, and the moving average process. Inflation, as a time series, has been estimated by applying ARIMA models (Iqbal et al., 2016; Habibah et al., 2017). Long memory properties of inflation have been studied globally, and ARFIMA models have been suggested (Hassler & Wolters, 1995; Baillie & Morano, 2012). In this study, we concentrate on the characteristics of headline and core inflation rather than on the methods used to determine core inflation. Various properties of core inflation are presented in the literature (Marques, 2003). Previous studies reported the presence of LRD behaviour in the inflation of some countries. Inspired by these facts, we examined India's monthly CPI headline inflation to check for self-similarity. Besides that, we try to identify whether the headline inflation series belongs to SRD or LRD based on the Hurst exponent. The Hurst index is computed using different methods: the R/S method, the Variance-Time method, Higuchi's method, and the Average Periodogram method. The Hurst parameter estimate gives an idea of the strength of the self-similar nature of CPI headline inflation in India. The chapter presents a criterion for core inflation measures in terms of the Hurst exponent. Core inflation indicators for CPI inflation are then selected based on the exclusion approach, and their Hurst exponents are examined, along with other properties of core inflation, to assess their suitability as core inflation measures. The present study plays a prominent role in the determination of core inflation in the Indian context. The empirical investigation has been conducted using monthly Combined CPI time series data for the period January 2012 to April 2019 (base year: 2012). The CPI headline year-on-year inflation is computed and checked for self-similar behaviour by computing the Hurst exponent. R programming and MS Excel tools are used for performing the analysis. The remaining work of the chapter is organized as follows: Section 2.2 presents an overview of the time series concepts. Section 2.3 discusses the preliminaries of self-similarity, its definition, and its usage in various fields. Section 2.4 offers a literature review in the context of self-similarity applications. Section 2.5 discusses the different methods to compute the Hurst exponent. Section 2.6 presents the numerical results of the Hurst exponent for CPI headline inflation. Section 2.7 establishes new criteria for core inflation based on the Hurst exponent and applies them to the conventional CPI exclusion measures. Section 2.9 presents the conclusions of the chapter.
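The rescaled-range (R/S) estimate of the Hurst exponent referred to above can be sketched in a few lines of Python with numpy; the input series here is simulated white noise (expected H near 0.5), not the CPI inflation data used in the chapter.

    # Sketch: Hurst exponent via rescaled-range (R/S) analysis.
    # The input is simulated white noise, not the CPI headline inflation series.
    import numpy as np

    def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
        series = np.asarray(series, dtype=float)
        log_n, log_rs = [], []
        for n in window_sizes:
            rs_values = []
            for start in range(0, len(series) - n + 1, n):   # non-overlapping blocks
                block = series[start:start + n]
                z = np.cumsum(block - block.mean())           # cumulative deviations
                r = z.max() - z.min()                         # range
                s = block.std(ddof=1)                         # standard deviation
                if s > 0:
                    rs_values.append(r / s)
            if rs_values:
                log_n.append(np.log(n))
                log_rs.append(np.log(np.mean(rs_values)))
        slope, _ = np.polyfit(log_n, log_rs, 1)               # slope estimates H
        return slope

    rng = np.random.default_rng(0)
    print(f"Estimated H for white noise: {hurst_rs(rng.standard_normal(1024)):.2f}")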

Conference papers on the topic "Compute as a Commodity (CaaC)"

1

Baba, H., S. Ohshita, T. Hamada, et al. "Novel Analog in-Memory Compute with > 1 nA Current/Cell and 143.9 TOPS/W Enabled by Monolithic Normally-off Zn-rich CAAC-IGZO FET-on-Si CMOS Technology." In 2021 IEEE International Electron Devices Meeting (IEDM). IEEE, 2021. http://dx.doi.org/10.1109/iedm19574.2021.9721312.

2

Meng, Hsien-Yu, Zhenyu Tang, and Dinesh Manocha. "Point-based Acoustic Scattering for Interactive Sound Propagation via Surface Encoding." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/126.

Abstract:
We present a novel geometric deep learning method to compute the acoustic scattering properties of geometric objects. Our learning algorithm uses a point cloud representation of objects to compute the scattering properties and integrates them with ray tracing for interactive sound propagation in dynamic scenes. We use discrete Laplacian-based surface encoders and approximate the neighborhood of each point using a shared multi-layer perceptron. We show that our formulation is permutation invariant and present a neural network that computes the scattering function using spherical harmonics. Our approach can handle objects with arbitrary topologies and deforming models, and takes less than 1ms per object on a commodity GPU. We have analyzed the accuracy and perform validation on thousands of unseen 3D objects and highlight the benefits over other point-based geometric deep learning methods. To the best of our knowledge, this is the first real-time learning algorithm that can approximate the acoustic scattering properties of arbitrary objects with high accuracy.
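The spherical-harmonic part of the formulation can be illustrated with scipy: a direction-dependent scattering value is reconstructed from a handful of harmonic coefficients. The coefficients are invented for illustration, and the learned surface encoder that would predict them is omitted.

    # Sketch: evaluate a direction-dependent function from spherical-harmonic
    # coefficients, one building block of a scattering-function representation.
    # The coefficients are made up; the learned encoder is omitted.
    import numpy as np
    from scipy.special import sph_harm

    # coeffs[(degree n, order m)] -> complex coefficient for Y_n^m (illustrative).
    coeffs = {(0, 0): 1.0 + 0j, (1, 0): 0.3 + 0j, (2, 1): 0.1 + 0.05j}

    def scattering_value(azimuth, polar):
        """Sum of c_{n,m} * Y_n^m(azimuth, polar); real part only."""
        total = sum(c * sph_harm(m, n, azimuth, polar) for (n, m), c in coeffs.items())
        return total.real

    for az in np.linspace(0, 2 * np.pi, 4, endpoint=False):
        print(f"azimuth={az:.2f}: {scattering_value(az, np.pi / 3):+.3f}")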
3

Patel, Chandrakant D., Cullen E. Bash, Ratnesh Sharma, Monem Beitelmal, and Rich Friedrich. "Smart Cooling of Data Centers." In ASME 2003 International Electronic Packaging Technical Conference and Exhibition. ASMEDC, 2003. http://dx.doi.org/10.1115/ipack2003-35059.

Abstract:
The data center of tomorrow is characterized as one containing a dense aggregation of commodity computing, networking and storage hardware mounted in industry standard racks. In fact, the data center is a computer. The walls of the data center are akin to the walls of the chassis in today's computer system. The new slim rack mounted systems and blade servers enable reduction in the footprint of today's data center by 66%. While maximizing computing per unit area, this compaction leads to extremely high power density and high cost associated with removal of the dissipated heat. Today's approach of cooling the entire data center to a constant temperature sampled at a single location, irrespective of the distributed utilization, is too energy inefficient. We propose a smart cooling system that provides localized cooling when and where needed and works in conjunction with a compute workload allocator to distribute compute workloads in the most energy efficient state. This paper shows a vision and construction of this intelligent data center that uses a combination of modeling, metrology and control to provision the air conditioning resources and workload distribution. A variable cooling system comprising variable capacity computer room air conditioning units, variable air moving devices, adjustable vents, etc. is used to dynamically allocate air conditioning resources where and when needed. A distributed metrology layer is used to sense environment variables such as temperature, pressure, and power. The data center energy manager redistributes the compute workloads based on the most energy efficient availability of cooling resources and vice versa. The distributed control layer is no longer associated with any single localized temperature measurement but is based on parameters calculated from an aggregation of sensors. The compute resources not in use are put on "standby", thereby providing added savings.
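The workload-allocation idea in this abstract, placing compute where cooling is cheapest, reduces to a small greedy sketch; the zones, cooling overheads, and jobs below are hypothetical illustration values, not measurements from the paper.

    # Greedy sketch of cooling-aware workload placement: each job goes to the
    # zone with the lowest cooling overhead that still has headroom.
    # Zones, efficiencies, and jobs are hypothetical illustration values.
    from dataclasses import dataclass

    @dataclass
    class Zone:
        name: str
        free_kw: float           # remaining power/cooling headroom
        cooling_w_per_w: float   # watts of cooling per watt of IT load (lower is better)

    zones = [Zone("zone-A", 30.0, 0.45), Zone("zone-B", 50.0, 0.30), Zone("zone-C", 20.0, 0.60)]
    jobs = [("batch-1", 25.0), ("web-tier", 15.0), ("analytics", 20.0)]   # (name, kW)

    for job, kw in jobs:
        candidates = [z for z in zones if z.free_kw >= kw]
        best = min(candidates, key=lambda z: z.cooling_w_per_w)   # cheapest to cool
        best.free_kw -= kw
        print(f"{job}: placed in {best.name} (cooling overhead {best.cooling_w_per_w:.0%})")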