Academic literature on the topic 'Cluster computing'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cluster computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Cluster computing"

1. Buyya, Rajkumar, Hai Jin, and Toni Cortes. "Cluster computing." Future Generation Computer Systems 18, no. 3 (2002): v–viii. http://dx.doi.org/10.1016/s0167-739x(01)00053-x.
2. Nwobodo, Ikechukwu. "Cloud Computing: A Detailed Relationship to Grid and Cluster Computing." International Journal of Future Computer and Communication 4, no. 2 (2015): 82–87. http://dx.doi.org/10.7763/ijfcc.2015.v4.361.
3. Rosenberg, Arnold L., and Ron C. Chiang. "Heterogeneity in Computing: Insights from a Worksharing Scheduling Problem." International Journal of Foundations of Computer Science 22, no. 06 (2011): 1471–93. http://dx.doi.org/10.1142/s0129054111008829.

Abstract:
Heterogeneity complicates the use of multicomputer platforms. Can it also enhance their performance? How can one measure the power of a heterogeneous assemblage of computers ("cluster"), in absolute terms (how powerful is this cluster) and relative terms (which cluster is more powerful)? Is a cluster that has one super-fast computer and the rest of "average" speed more/less powerful than one all of whose computers are "moderately" fast? If you can replace just one computer in a cluster with a faster one, should you replace the fastest? the slowest? A result concerning "worksharing" in heterogeneous clusters provides a highly idealized, yet algorithmically meaningful, framework for studying such questions in a way that admits rigorous analysis and formal proof. We encounter some surprises as we answer the preceding questions (perforce, within the idealized framework). Highlights: (1) If one can replace only one computer in a cluster by a faster one, it is (almost) always most advantageous to replace the fastest one. (2) If the computers in two clusters have the same mean speed, then the cluster with the larger variance in speed is (almost) always more productive (verified analytically for small clusters and empirically for large ones.) (3) Heterogeneity can actually enhance a cluster's computing power.
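The abstract's questions invite quick numeric experiments. Below is a minimal Python sketch of a toy worksharing model in the spirit of divisible load scheduling; it is an illustration only, not Rosenberg and Chiang's actual framework, and the link cost c, the speed lists, and the deadline T are invented for the example.

    # Toy model: a master sends work to workers one at a time over a shared
    # link. Sending x units costs c*x time; a worker of speed s then computes
    # its chunk in x/s time. Given a deadline T, each worker receives the
    # largest chunk it can still finish on time.
    def work_by_deadline(speeds, T, c=0.1):
        done, clock = 0.0, 0.0          # work placed so far; link busy-time
        for s in speeds:
            # chunk received during [clock, clock + c*x], computed in x/s:
            # clock + c*x + x/s <= T  =>  x = (T - clock) / (c + 1/s)
            x = max(0.0, (T - clock) / (c + 1.0 / s))
            clock += c * x
            done += x
        return done

    T = 10.0
    uniform = [2.0, 2.0, 2.0, 2.0]      # mean speed 2, zero variance
    spread = [3.5, 2.0, 1.5, 1.0]       # mean speed 2, higher variance
    for speeds in (uniform, spread):
        for order in (sorted(speeds, reverse=True), sorted(speeds)):
            print(order, "->", round(work_by_deadline(order, T), 3))

Serving the fastest worker first can be compared against serving the slowest first for both clusters, and varying c and the speed lists is an easy way to probe the abstract's questions numerically.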
4. Hatcher, P., M. Reno, G. Antoniu, and L. Bouge. "Cluster Computing with Java." Computing in Science and Engineering 7, no. 2 (2005): 34–39. http://dx.doi.org/10.1109/mcse.2005.28.
5. Du, Ran, Jingyan Shi, Xiaowei Jiang, and Jiaheng Zou. "Cosmos: A Unified Accounting System both for the HTCondor and Slurm Clusters at IHEP." EPJ Web of Conferences 245 (2020): 07060. http://dx.doi.org/10.1051/epjconf/202024507060.

Abstract:
HTCondor was adopted to manage the High Throughput Computing (HTC) cluster at IHEP in 2016. In 2017 a Slurm cluster was set up to run High Performance Computing (HPC) jobs. To provide accounting services for these two clusters, we implemented a unified accounting system named Cosmos. Multiple workloads bring different accounting requirements; briefly speaking, there are four types of jobs to account for. First, 30 million single-core jobs run in the HTCondor cluster every year. Second, Virtual Machine (VM) jobs run in the legacy HTCondor VM cluster. Third, parallel jobs run in the Slurm cluster, and some of these jobs run on GPU worker nodes to accelerate computing. Lastly, some selected HTC jobs are migrated from the HTCondor cluster to the Slurm cluster for research purposes. To satisfy all of these requirements, Cosmos is implemented with four layers: acquisition, integration, statistics, and presentation. Details about the issues and solutions of each layer are presented in the paper. Cosmos has run in production for two years, and its status shows that it is a well-functioning system that meets the requirements of the HTCondor and Slurm clusters.
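The abstract does not give implementation details, but the acquisition layer of such a system plausibly pulls completed-job records from each scheduler's native accounting tool. A minimal sketch, assuming condor_history and sacct are available on the submit hosts; the field list, limits, and dates are illustrative choices, not Cosmos's actual configuration.

    import json
    import subprocess

    def htcondor_history(limit=1000):
        # condor_history lists completed HTCondor jobs; -json emits the
        # records as a JSON array.
        out = subprocess.run(
            ["condor_history", "-json", "-limit", str(limit)],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out) if out.strip() else []

    def slurm_accounting(start="2020-01-01"):
        # sacct lists Slurm accounting records; --parsable2 gives
        # '|'-separated fields without trailing delimiters.
        fields = "JobID,User,Partition,AllocCPUS,Elapsed,State"
        out = subprocess.run(
            ["sacct", "--allusers", "--starttime", start, "--noheader",
             "--parsable2", "--format=" + fields],
            capture_output=True, text=True, check=True).stdout
        return [dict(zip(fields.split(","), line.split("|")))
                for line in out.splitlines() if line]

    # An integration layer would then normalize both record shapes into one
    # schema for the statistics and presentation layers to consume.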
6. Vorobyev, M. Yu., A. A. Rybakov, and A. N. Salnikov. "A Tool for Modeling the Balancing of User Task Flows Among Several Independent Computing Clusters" [in Russian]. Trudy NIISI RAN 9, no. 6 (2020): 142–47. http://dx.doi.org/10.25682/niisi.2019.6.0018.

Abstract:
The paper describes a software environment for analyzing algorithms that balance user task flows across a group of computing clusters used for high-performance computing. The simulation ran several SLURM queuing systems simultaneously on several virtual machines in multi-node mode. The task flow was generated with the pseudo-cluster modeling tool, modified to support working with several clusters at once. The system was tested on a model log of 2,000 user computing tasks composed from typical queue behavior patterns of real computing clusters.
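To give the setup some flavor, here is a hedged sketch of how a synthetic task flow might be pushed from one submit host to several independent Slurm clusters using sbatch's --clusters flag. The cluster names, task mix, and round-robin policy are hypothetical stand-ins for the pseudo-cluster tool's generated log.

    import itertools
    import subprocess

    clusters = ["alpha", "beta", "gamma"]     # hypothetical cluster names
    next_cluster = itertools.cycle(clusters)  # simplest policy: round-robin

    # A model log of 2,000 tasks as (cores, wall-time in minutes) pairs; the
    # mix is invented, while a real log follows observed queue patterns.
    tasks = [(1, 10)] * 1500 + [(8, 60)] * 500

    for cores, minutes in tasks:
        target = next(next_cluster)
        # --clusters routes the job to the named cluster's controller;
        # --wrap turns the command string into a one-line batch script.
        subprocess.run(
            ["sbatch", "--clusters=" + target, "--ntasks=" + str(cores),
             "--time=" + str(minutes), "--wrap", "sleep 60"],
            check=True)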
7. Pushkar, V. I., H. D. Kyselov, Ye. V. Olenovych, and O. H. Kyivskyi. "Computing cluster performance evaluation in department of the university." Electronics and Communications 15, no. 5 (2010): 211–16. http://dx.doi.org/10.20535/2312-1807.2010.58.5.285236.

Abstract:
An educational cluster based on low-speed servers was created, and its total performance was tested and evaluated. The results demonstrate that such a cluster can support the educational process, in particular the study of parallel programming and the training of cluster system administrators.
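A common first measurement on such a cluster is a two-rank ping-pong test for message latency and bandwidth, the kind of testing the abstract describes. A minimal sketch using mpi4py; the payload size and iteration count are arbitrary choices.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    buf = np.zeros(1 << 20, dtype=np.uint8)   # 1 MiB payload
    iters = 100

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    elapsed = MPI.Wtime() - t0

    if rank == 0:
        rtt = elapsed / iters                  # average round-trip time
        mb_per_s = 2 * buf.nbytes / rtt / 1e6  # two transfers per round trip
        print(f"avg round trip {rtt * 1e6:.1f} us, ~{mb_per_s:.1f} MB/s")

Run across two of the servers with, for example, mpiexec -n 2 python pingpong.py.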
8. Tripathy, Minakshi, and C. R. Tripathy. "A Comparative Analysis of Performance of Shared Memory Cluster Computing Interconnection Systems." Journal of Computer Networks and Communications 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/128438.

Abstract:
In the recent past, many types of shared memory cluster computing interconnection systems have been proposed, each with its own advantages and limitations. As the system size of these cluster interconnection systems grows, a comparative analysis of their various performance measures becomes inevitable. Cluster architecture, load balancing, and fault tolerance are some of the important aspects that need to be addressed, and the comparison helps in choosing the best system for a particular application. In this paper, a detailed comparative study of four important and different classes of shared memory cluster architectures is made: shared memory clusters, hierarchical shared memory clusters, distributed shared memory clusters, and virtual distributed shared memory clusters. These clusters are analyzed and compared with respect to architecture, load balancing, and fault tolerance, and the results of the comparison are reported.
9. Sheng, Chong Chong, Wei Song Hu, Xin Ming Hu, and Bai Feng Wu. "StreamMAP: Automatic Task Assignment System on GPU Cluster." Advanced Materials Research 926-930 (May 2014): 2414–17. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.2414.

Abstract:
GPU clusters that use general-purpose GPUs (GPGPUs) as accelerators are becoming more and more popular in the high performance computing area. Currently the most common programming model for GPU clusters is hybrid MPI/CUDA. However, with this model programmers tend to need detailed knowledge of the hardware resources, which makes programs more complicated and less portable. In this paper, we present StreamMAP, an automatic task assignment system for GPU clusters. The main contributions of StreamMAP are that (1) it provides a powerful yet concise language extension suitable for describing the computing resource demands of cluster tasks, and (2) it maintains resource information and implements automatic task assignment for the GPU cluster. Experiments show that StreamMAP provides programmability, portability, and performance gains.
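The hardware-mapping burden the abstract describes is concrete: in plain hybrid MPI/CUDA, every rank must select its own GPU by hand. A minimal sketch of the usual manual idiom that systems like StreamMAP aim to automate; the four-GPUs-per-node figure is an assumption, and ranks are assumed to be placed on nodes in contiguous blocks.

    import os
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    GPUS_PER_NODE = 4                  # assumption about the node hardware
    local_gpu = rank % GPUS_PER_NODE   # valid for block rank placement

    # Must be set before any CUDA context is created; afterwards the process
    # sees only its assigned device, exposed as device 0.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(local_gpu)

    print(f"rank {rank} pinned to GPU {local_gpu}")
    # ...GPU work (e.g., via CuPy or PyCUDA) would follow here...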
10. Ranjit, Rajak. "A Comparative Study: Taxonomy of High Performance Computing (HPC)." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (2018): 3386–91. https://doi.org/10.11591/ijece.v8i5.pp3386-3391.

Abstract:
Computer technologies have developed rapidly in both software and hardware. The complexity of software is increasing with market demand as manual systems become automated, while the cost of hardware is decreasing. High Performance Computing (HPC) is a demanding technology and an attractive area of computing because many computing applications must process huge amounts of data. The paper focuses on different applications of HPC and on its main types: cluster computing, grid computing, and cloud computing. It also reviews the classifications and applications of each of these types, all of which are active areas of computer science research, and presents a comparative study of grid, cloud, and cluster computing based on their benefits, drawbacks, key research areas, characteristics, issues, and challenges.

Dissertations / Theses on the topic "Cluster computing"

1. Boden, Harald. "Multidisziplinäre Optimierung und Cluster-Computing." Heidelberg: Physica-Verlag, 1996. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=007156051&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
2. Rosu, Marcel-Catalin. "Communication support for cluster computing." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/8256.
3. Zhang, Hua. "VCluster: A Portable Virtual Computing Library for Cluster Computing." Doctoral diss., Orlando, Fla.: University of Central Florida, 2008. http://purl.fcla.edu/fcla/etd/CFE0002339.
4. Lee, Chun-ming [李俊明]. "Efficient communication subsystem for cluster computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31221245.
5. Lee, Chun-ming. "Efficient communication subsystem for cluster computing." Hong Kong: University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20604592.
6. Solsona Tehàs, Francesc. "Coscheduling Techniques for Non-Dedicated Cluster Computing." Doctoral thesis, Universitat Autònoma de Barcelona, 2002. http://hdl.handle.net/10803/3029.

Abstract:
Efforts of this thesis are centered on constructing a virtual machine over a cluster system that provides the double functionality of executing traditional workstation jobs as well as distributed applications efficiently. To solve the problem, two major considerations must be addressed: how to share and schedule the workstation resources (especially the CPU) between the local and distributed applications, and how to manage and control the overall system so that both kinds of application execute efficiently.

Coscheduling is the base principle used for sharing and scheduling the CPU. It reduces the communication waiting time of distributed applications by scheduling their constituent tasks, or a subset of them, at the same time. Consequently, coscheduling only benefits distributed applications with remote communication; non-communicating (CPU-bound) distributed applications are not favored. Coscheduling techniques follow two major trends, explicit control and implicit control, classified by the way the distributed tasks are managed and controlled: in explicit control, this work is carried out by specialized processes and (or) processors, whereas in implicit control, coscheduling is performed by making local scheduling decisions depending on the events occurring in each workstation.

Two coscheduling mechanisms, one following each control trend, are presented in this thesis. They provide additional features including usability, good performance in the execution of distributed applications, simultaneous execution of several distributed applications, low overhead, and low impact on local workload performance. An implicit-control coscheduling model is also presented; it collects on-time performance statistics, serves as a basic scheme for developing new coscheduling policies, and underlies the implicit-control mechanism presented here. The good scheduling behavior of the presented models is shown first by simulation, with their performance compared against other coscheduling techniques in the literature. The principal coscheduling techniques studied were also implemented in a real cluster system, making it possible to collect performance measurements of the different techniques and compare them in the same environment; the results provide an important orientation for future research in coscheduling. Measurements in the real cluster were made using various distributed benchmarks with different message patterns (regular and irregular communication patterns, token rings, all-to-all, and so on), and communication primitives such as barriers and basic sends and receives over one- and two-directional links were measured separately. This broad range of distributed applications supports an accurate analysis of the usefulness and applicability of the presented coscheduling techniques in cluster computing.
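To make the implicit-control idea concrete, here is a toy round-based simulation; it is a sketch under invented parameters, not a mechanism from the thesis. Each node picks its highest-priority task using only local information, and an arriving message boosts the receiving distributed task, so communicating peers tend to end up scheduled at the same time.

    import random

    random.seed(1)
    NODES, ROUNDS, BOOST, SEND_P = 4, 1000, 5, 0.3

    class Task:
        def __init__(self, name, distributed):
            self.name, self.distributed = name, distributed
            self.priority, self.ran = 0, 0

    # Per node: one task of a single distributed job plus one local job.
    nodes = [[Task(f"dist@{n}", True), Task(f"local@{n}", False)]
             for n in range(NODES)]

    for _ in range(ROUNDS):
        for node in nodes:
            task = max(node, key=lambda t: t.priority)  # local decision only
            task.ran += 1
            task.priority -= 1                          # aging while running
            if task.distributed and random.random() < SEND_P:
                # Message to peers: each receiver's local scheduler raises
                # the peer task's priority, aligning the job's tasks.
                for other in nodes:
                    if other is not node:
                        other[0].priority += BOOST

    for node in nodes:
        for t in node:
            print(f"{t.name}: ran {t.ran} of {ROUNDS} rounds")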
7. Jacob, Aju. "Distributed configuration management for reconfigurable cluster computing." Gainesville, Fla.: University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0007181.
8. Stewart, Sean. "Deploying a CMS Tier-3 Computing Cluster with Grid-enabled Computing Infrastructure." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2564.

Abstract:
The Large Hadron Collider (LHC), whose experiments include the Compact Muon Solenoid (CMS), produces over 30 million gigabytes of data annually, and implements a distributed computing architecture—a tiered hierarchy, from Tier-0 through Tier-3—in order to process and store all of this data. Out of all of the computing tiers, Tier-3 clusters allow scientists the most freedom and flexibility to perform their analyses of LHC data. Tier-3 clusters also provide local services such as login and storage services, provide a means to locally host and analyze LHC data, and allow both remote and local users to submit grid-based jobs. Using the Rocks cluster distribution software version 6.1.1, along with the Open Science Grid (OSG) roll version 3.2.35, a grid-enabled CMS Tier-3 computing cluster was deployed at Florida International University’s Modesto A. Maidique campus. Validation metric results from Ganglia, MyOSG, and CMS Dashboard verified a successful deployment.
9. Maiti, Anindya. "Distributed cluster computing on high-speed switched LANs." Thesis, National Library of Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0012/MQ41741.pdf.
10. Singla, Aman. "Beehive: application-driven systems support for cluster computing." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/8278.

Books on the topic "Cluster computing"

1. Buyya, Rajkumar, and Clemens Szyperski, eds. Cluster Computing. Nova Science Publishers, 2001.
2. Buyya, Rajkumar, ed. High Performance Cluster Computing. Prentice Hall PTR, 1999.
3. Hoffmann, Karl Heinz, and Arnd Meyer, eds. Parallel Algorithms and Cluster Computing. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-33541-2.
4. Boden, Harald. Multidisziplinäre Optimierung und Cluster-Computing. Physica-Verlag HD, 1996. http://dx.doi.org/10.1007/978-3-642-48081-2.
5. Sterling, Thomas Lawrence, ed. Beowulf Cluster Computing with Linux. MIT Press, 2002.
6. Gropp, William, Ewing Lusk, and Thomas Lawrence Sterling, eds. Beowulf Cluster Computing with Linux. 2nd ed. MIT Press, 2003.
7. Sterling, Thomas Lawrence, ed. Beowulf Cluster Computing with Windows. MIT Press, 2002.
8. Fey, Dietmar. Grid-Computing: Grid Computing für Computational Science. Springer Berlin, 2009.
9. Jimenez, Ciceron, and Maurice Ortego, eds. Cluster Computing and Multi-Hop Network Research. Nova Science Publishers, Inc., 2009.
10. Grigoras, Dan, Alex Nicolau, Bernard Toursel, and Bertil Folliot, eds. Advanced Environments, Tools, and Applications for Cluster Computing. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47840-x.

Book chapters on the topic "Cluster computing"

1. Baker, Mark, John Brooke, Ken Hawick, and Rajkumar Buyya. "Cluster Computing." In Euro-Par 2001 Parallel Processing. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44681-8_100.
2. Buyya, Rajkumar, Mark Baker, Daniel C. Hyde, and Djamshid Tavangarian. "Cluster Computing." In Euro-Par 2000 Parallel Processing. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44520-x_158.
3. Frenz, Stefan, Michael Schoettner, Ralph Goeckelmann, and Peter Schulthess. "Transactional Cluster Computing." In High Performance Computing and Communications. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11557654_55.
4. Ong, Hong, and Mark Baker. "Cluster Computing Fundamentals." In Handbook of Computer Networks. John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118256107.ch6.
5. Steele, Guy L., Xiaowei Shen, Josep Torrellas, et al. "Cluster of Workstations." In Encyclopedia of Parallel Computing. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2120.
6. Steele, Guy L., Xiaowei Shen, Josep Torrellas, et al. "Cluster File Systems." In Encyclopedia of Parallel Computing. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2245.
7. Chinchalkar, Shirish, Thomas F. Coleman, and Peter Mansfield. "Cluster Computing for Financial Engineering." In Applied Parallel Computing. State of the Art in Scientific Computing. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11558958_47.
8. Minartz, Timo, Daniel Molka, Michael Knobloch, et al. "eeClust: Energy-Efficient Cluster Computing." In Competence in High Performance Computing 2010. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24025-6_10.
9. Afrati, Foto N., Vinayak Borkar, Michael Carey, Neoklis Polyzotis, and Jeffrey D. Ullman. "Cluster Computing, Recursion and Datalog." In Datalog Reloaded. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24206-9_8.
10. Chiesa, Alessandro, Eran Tromer, and Madars Virza. "Cluster Computing in Zero Knowledge." In Advances in Cryptology - EUROCRYPT 2015. Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46803-6_13.

Conference papers on the topic "Cluster computing"

1. Li, Jie, George Michelogiannakis, Samuel Maloney, et al. "Job Scheduling in High Performance Computing Systems with Disaggregated Memory Resources." In 2024 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2024. http://dx.doi.org/10.1109/cluster59578.2024.00033.
2. Farahani, Reza, Narges Mehran, Sashko Ristov, and Radu Prodan. "HEFTLess: A Bi-Objective Serverless Workflow Batch Orchestration on the Computing Continuum." In 2024 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2024. http://dx.doi.org/10.1109/cluster59578.2024.00032.
3. Oh, Sejeong, Gordon Euhyun Moon, and Sungyong Park. "ML-Based Dynamic Operator-Level Query Mapping for Stream Processing Systems in Heterogeneous Computing Environments." In 2024 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2024. http://dx.doi.org/10.1109/cluster59578.2024.00027.
4. Anderson, Matthew, and Matthew Sgambati. "Microgrid Integration with High Performance Computing Systems for Microreactor Operation." In 2024 IEEE International Conference on Cluster Computing Workshops (CLUSTER Workshops). IEEE, 2024. http://dx.doi.org/10.1109/clusterworkshops61563.2024.00017.
5. Kica, Piotr, Sabina Lichołai, Michał Orzechowski, and Maciej Malawski. "Optimizing Star Aligner for High Throughput Computing in the Cloud." In 2024 IEEE International Conference on Cluster Computing Workshops (CLUSTER Workshops). IEEE, 2024. http://dx.doi.org/10.1109/clusterworkshops61563.2024.00039.
6. Sumimoto, Shinji, Kazuya Yamazaki, Yao Hu, and Kengo Nakajima. "Heterogeneous Application Coupling Library for Center-Wide QC-HPC Hybrid Computing." In 2024 IEEE International Conference on Cluster Computing Workshops (CLUSTER Workshops). IEEE, 2024. http://dx.doi.org/10.1109/clusterworkshops61563.2024.00049.
7. Satoh. "Reusable mobile agents for cluster computing." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253324.
8. Ancona, M. "Cluster computing." In Proceedings Eleventh Euromicro Conference on Parallel, Distributed and Network-Based Processing. IEEE, 2003. http://dx.doi.org/10.1109/empdp.2003.1183567.
9. Crago, Steve, Kyle Dunn, Patrick Eads, et al. "Heterogeneous Cloud Computing." In 2011 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2011. http://dx.doi.org/10.1109/cluster.2011.49.
10. "Proceedings. IEEE International Conference on Cluster Computing." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253292.

Reports on the topic "Cluster computing"

1. Alsing, Paul, Michael Fanto, and A. M. Smith. Cluster State Quantum Computing. Defense Technical Information Center, 2012. http://dx.doi.org/10.21236/ada572237.
2. Richards, Mark A., and Daniel P. Campbell. Rapidly Reconfigurable High Performance Computing Cluster. Defense Technical Information Center, 2005. http://dx.doi.org/10.21236/ada438586.
3. Duke, D. W., and T. P. Green. [Research toward a heterogeneous networked computing cluster]. Office of Scientific and Technical Information (OSTI), 1998. http://dx.doi.org/10.2172/674884.
4. Li, Haoyuan, Ali Ghodsi, Matei Zaharia, Scott Shenker, and Ion Stoica. Reliable, Memory Speed Storage for Cluster Computing Frameworks. Defense Technical Information Center, 2014. http://dx.doi.org/10.21236/ada611854.
5. Chen, H. Y., J. M. Brandt, and R. C. Armstrong. ATM-based cluster computing for multi-problem domains. Office of Scientific and Technical Information (OSTI), 1996. http://dx.doi.org/10.2172/415338.
6. Burger, Eric. Studying Building Energy Use with a Micro Computing Cluster. Experiment, 2014. http://dx.doi.org/10.18258/3777.
7. Ilg, Mark. Multi-Core Computing Cluster for Safety Fan Analysis of Guided Projectiles. Defense Technical Information Center, 2011. http://dx.doi.org/10.21236/ada551790.
8. Lele, Sanjiva K. Computing Cluster for Large Scale Turbulence Simulations and Applications in Computational Aeroacoustics. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada406713.
9. Abu-Gazaleh, Nael. Using Heterogeneous High Performance Computing Cluster for Supporting Fine-Grained Parallel Applications. Defense Technical Information Center, 2006. http://dx.doi.org/10.21236/ada459900.
10. Gottlieb, Sigal. A Heterogeneous Terascale Computing Cluster for the Development of GPU Optimized High Order Numerical Methods. Defense Technical Information Center, 2011. http://dx.doi.org/10.21236/ada566277.