Academic literature on the topic 'And Cluster Computing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'And Cluster Computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "And Cluster Computing"

1

Buyya, Rajkumar, Hai Jin, and Toni Cortes. "Cluster computing." Future Generation Computer Systems 18, no. 3 (January 2002): v–viii. http://dx.doi.org/10.1016/s0167-739x(01)00053-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nwobodo, Ikechukwu. "Cloud Computing: A Detailed Relationship to Grid and Cluster Computing." International Journal of Future Computer and Communication 4, no. 2 (April 2015): 82–87. http://dx.doi.org/10.7763/ijfcc.2015.v4.361.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rosenberg, Arnold L., and Ron C. Chiang. "Heterogeneity in Computing: Insights from a Worksharing Scheduling Problem." International Journal of Foundations of Computer Science 22, no. 6 (September 2011): 1471–93. http://dx.doi.org/10.1142/s0129054111008829.

Full text
Abstract:
Heterogeneity complicates the use of multicomputer platforms. Can it also enhance their performance? How can one measure the power of a heterogeneous assemblage of computers ("cluster"), in absolute terms (how powerful is this cluster) and relative terms (which cluster is more powerful)? Is a cluster that has one super-fast computer and the rest of "average" speed more/less powerful than one all of whose computers are "moderately" fast? If you can replace just one computer in a cluster with a faster one, should you replace the fastest? the slowest? A result concerning "worksharing" in heterogeneous clusters provides a highly idealized, yet algorithmically meaningful, framework for studying such questions in a way that admits rigorous analysis and formal proof. We encounter some surprises as we answer the preceding questions (perforce, within the idealized framework). Highlights: (1) If one can replace only one computer in a cluster by a faster one, it is (almost) always most advantageous to replace the fastest one. (2) If the computers in two clusters have the same mean speed, then the cluster with the larger variance in speed is (almost) always more productive (verified analytically for small clusters and empirically for large ones). (3) Heterogeneity can actually enhance a cluster's computing power.
APA, Harvard, Vancouver, ISO, and other styles
4

Hatcher, P., M. Reno, G. Antoniu, and L. Bouge. "Cluster Computing with Java." Computing in Science and Engineering 7, no. 2 (March 2005): 34–39. http://dx.doi.org/10.1109/mcse.2005.28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Du, Ran, Jingyan Shi, Xiaowei Jiang, and Jiaheng Zou. "Cosmos: A Unified Accounting System both for the HTCondor and Slurm Clusters at IHEP." EPJ Web of Conferences 245 (2020): 07060. http://dx.doi.org/10.1051/epjconf/202024507060.

Full text
Abstract:
HTCondor was adopted to manage the High Throughput Computing (HTC) cluster at IHEP in 2016. In 2017 a Slurm cluster was set up to run High Performance Computing (HPC) jobs. To provide accounting services for these two clusters, we implemented a unified accounting system named Cosmos. Multiple workloads bring different accounting requirements. Briefly speaking, there are four types of jobs to account for. First, 30 million single-core jobs run in the HTCondor cluster every year. Second, Virtual Machine (VM) jobs run in the legacy HTCondor VM cluster. Third, parallel jobs run in the Slurm cluster, and some of these jobs run on GPU worker nodes to accelerate computing. Lastly, some selected HTC jobs are migrated from the HTCondor cluster to the Slurm cluster for research purposes. To satisfy all the mentioned requirements, Cosmos is implemented with four layers: acquisition, integration, statistics and presentation. Details about the issues and solutions of each layer are presented in the paper. Cosmos has run in production for two years, and its status shows that it is a well-functioning system that meets the requirements of the HTCondor and Slurm clusters.
APA, Harvard, Vancouver, ISO, and other styles
6

Pushkar, V. I., H. D. Kyselov, YE V. Olenovych, and O. H. Kyivskyi. "Computing cluster performance evaluation in department of the university." Electronics and Communications 15, no. 5 (March 29, 2010): 211–16. http://dx.doi.org/10.20535/2312-1807.2010.58.5.285236.

Full text
Abstract:
An educational cluster based on low-speed servers was created. Testing and overall performance evaluations were carried out. The results demonstrate that the cluster is suitable for the educational process, namely for teaching parallel programming and for training cluster system administrators.
APA, Harvard, Vancouver, ISO, and other styles
7

Tripathy, Minakshi, and C. R. Tripathy. "A Comparative Analysis of Performance of Shared Memory Cluster Computing Interconnection Systems." Journal of Computer Networks and Communications 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/128438.

Full text
Abstract:
In recent past, many types of shared memory cluster computing interconnection systems have been proposed. Each of these systems has its own advantages and limitations. With the increase in system size of the cluster interconnection systems, the comparative analysis of their various performance measures becomes quite inevitable. The cluster architecture, load balancing, and fault tolerance are some of the important aspects, which need to be addressed. The comparison needs to be made in order to choose the best one for a particular application. In this paper, a detailed comparative study on four important and different classes of shared memory cluster architectures has been made. The systems taken up for the purpose of the study are shared memory clusters, hierarchical shared memory clusters, distributed shared memory clusters, and the virtual distributed shared memory clusters. These clusters are analyzed and compared on the basis of the architecture, load balancing, and fault tolerance aspects. The results of comparison are reported.
APA, Harvard, Vancouver, ISO, and other styles
8

Fowler, A. G., and K. Goyal. "Topological cluster state quantum computing." Quantum Information and Computation 9, no. 9&10 (September 2009): 721–38. http://dx.doi.org/10.26421/qic9.9-10-1.

Full text
Abstract:
The quantum computing scheme described by Raussendorf et al. (2007), when viewed as a cluster state computation, features a 3-D cluster state, novel adjustable-strength error correction capable of correcting general errors through the correction of Z errors only, a threshold error rate approaching 1%, and low-overhead, arbitrarily long-range logical gates. In this work, we review the scheme in detail, framing the discussion solely in terms of the required 3-D cluster state and its stabilizers.
APA, Harvard, Vancouver, ISO, and other styles
9

Saman, M. Y., and D. J. Evans. "Distributed computing on cluster systems." International Journal of Computer Mathematics 78, no. 3 (January 2001): 383–97. http://dx.doi.org/10.1080/00207160108805118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Thiruvathukal, G. K. "Guest Editors' Introduction: Cluster Computing." Computing in Science and Engineering 7, no. 2 (March 2005): 11–13. http://dx.doi.org/10.1109/mcse.2005.33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "And Cluster Computing"

1

Boden, Harald. "Multidisziplinäre Optimierung und Cluster-Computing /." Heidelberg : Physica-Verl, 1996. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=007156051&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rosu, Marcel-Catalin. "Communication support for cluster computing." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/8256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Hua. "VCLUSTER: A PORTABLE VIRTUAL COMPUTING LIBRARY FOR CLUSTER COMPUTING." Doctoral diss., Orlando, Fla. : University of Central Florida, 2008. http://purl.fcla.edu/fcla/etd/CFE0002339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lee, Chun-ming, and 李俊明. "Efficient communication subsystem for cluster computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31221245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Chun-ming. "Efficient communication subsystem for cluster computing /." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20604592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Solsona, Tehàs Francesc. "Coscheduling Techniques for Non-Dedicated Cluster Computing." Doctoral thesis, Universitat Autònoma de Barcelona, 2002. http://hdl.handle.net/10803/3029.

Full text
Abstract:
Efforts of this Thesis are centered on constructing a Virtual
Machine over a Cluster system that provides the double functionality
of executing traditional workstation jobs as well as distributed
applications efficiently.

To solve the problem, two major considerations must be addressed:

* How to share and schedule the workstation resources (especially
the CPU) between the local and distributed applications.

* How to manage and control the overall system for the efficient
execution of both application kinds.

Coscheduling is the base principle used for the sharing and
scheduling of the CPU. Coscheduling is based on reducing
the communication waiting time of distributed applications
by scheduling their forming tasks, or a subset of them at
the same time. Consequently, non-communicating distributed
applications (CPU bound ones) will not be favored by the
application of coscheduling. Only the performance of distributed
applications with remote communication can be increased
with coscheduling.

Coscheduling techniques follow two major trends: explicit
and implicit control. This classification is based on the
way the distributed tasks are managed and controlled. Basically,
in explicit-control, such work is carried out by specialized
processes and (or) processors. In contrast, in implicit-control,
coscheduling is performed by making local scheduling decisions
depending on the events occurring in each workstation.

Two coscheduling mechanisms which follow the two different
control trends are presented in this project. They also
provide additional features including usability, good performance
in the execution of distributed applications, simultaneous
execution of distributed applications, low overhead and
also low impact on local workload performance. The design
of the coscheduling techniques was mainly influenced by
the optimization of these features.

An implicit-control coscheduling model is also presented.
Some of the features it provides include collecting on-time
performance statistics and its usefulness as a basic scheme
for developing new coscheduling policies. The presented
implicit-control mechanism is based on this model.

The good scheduling behavior of the coscheduling models presented
is shown firstly by simulation, and their performance compared
with other coscheduling techniques in the literature. A
great effort is also made to implement the principal studied
coscheduling techniques in a real Cluster system. Thus,
it is possible to collect performance measurements of the
different coscheduling techniques and compare them in the
same environment. The study of the results obtained will
provide an important orientation for future research in
coscheduling because, to our knowledge, no similar work
(in the literature) has been done before.

Measurements in the real Cluster system were made by using
various distributed benchmarks with different message patterns:
regular and irregular communication patterns, token rings,
all-to-all and so on. Also, communication primitives such
as barriers and basic sending and receiving using one and
two directional links were separately measured. By using
this broad range of distributed applications, an accurate
analysis of the usefulness and applicability of the presented
coscheduling techniques in Cluster computing is performed.
APA, Harvard, Vancouver, ISO, and other styles
7

Jacob, Aju. "Distributed configuration management for reconfigurable cluster computing." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0007181.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Stewart, Sean. "Deploying a CMS Tier-3 Computing Cluster with Grid-enabled Computing Infrastructure." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2564.

Full text
Abstract:
The Large Hadron Collider (LHC), whose experiments include the Compact Muon Solenoid (CMS), produces over 30 million gigabytes of data annually, and implements a distributed computing architecture—a tiered hierarchy, from Tier-0 through Tier-3—in order to process and store all of this data. Out of all of the computing tiers, Tier-3 clusters allow scientists the most freedom and flexibility to perform their analyses of LHC data. Tier-3 clusters also provide local services such as login and storage services, provide a means to locally host and analyze LHC data, and allow both remote and local users to submit grid-based jobs. Using the Rocks cluster distribution software version 6.1.1, along with the Open Science Grid (OSG) roll version 3.2.35, a grid-enabled CMS Tier-3 computing cluster was deployed at Florida International University’s Modesto A. Maidique campus. Validation metric results from Ganglia, MyOSG, and CMS Dashboard verified a successful deployment.
APA, Harvard, Vancouver, ISO, and other styles
9

Maiti, Anindya. "Distributed cluster computing on high-speed switched LANs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0012/MQ41741.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Singla, Aman. "Beehive : application-driven systems support for cluster computing." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/8278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "And Cluster Computing"

1

Buyya, Rajkumar, and Clemens Szyperski, eds. Cluster computing. Huntington, N.Y.: Nova Science Publishers, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Buyya, Rajkumar, ed. High performance cluster computing. Upper Saddle River, N.J.: Prentice Hall PTR, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Computing networks: From cluster to cloud computing. London: ISTE, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hoffmann, Karl Heinz, and Arnd Meyer, eds. Parallel Algorithms and Cluster Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-33541-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Boden, Harald. Multidisziplinäre Optimierung und Cluster-Computing. Heidelberg: Physica-Verlag HD, 1996. http://dx.doi.org/10.1007/978-3-642-48081-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sterling, Thomas Lawrence, ed. Beowulf cluster computing with Linux. Cambridge, Mass.: MIT Press, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sterling, Thomas Lawrence, ed. Beowulf cluster computing with Windows. Cambridge, Mass.: MIT Press, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gropp, William, Ewing Lusk, and Thomas Lawrence Sterling, eds. Beowulf cluster computing with Linux. 2nd ed. Cambridge, Mass.: MIT Press, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Fey, Dietmar. Grid-Computing: Grid Computing für Computational Science. Berlin: Springer Berlin, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jimenez, Ciceron, and Maurice Ortego, eds. Cluster computing and multi-hop network research. Hauppauge, N.Y.: Nova Science Publishers, Inc., 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "And Cluster Computing"

1

Baker, Mark, John Brooke, Ken Hawick, and Rajkumar Buyya. "Cluster Computing." In Euro-Par 2001 Parallel Processing, 702–3. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44681-8_100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Buyya, Rajkumar, Mark Baker, Daniel C. Hyde, and Djamshid Tavangarian. "Cluster Computing." In Euro-Par 2000 Parallel Processing, 1115–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44520-x_158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Frenz, Stefan, Michael Schoettner, Ralph Goeckelmann, and Peter Schulthess. "Transactional Cluster Computing." In High Performance Computing and Communications, 465–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11557654_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ong, Hong, and Mark Baker. "Cluster Computing Fundamentals." In Handbook of Computer Networks, 79–92. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118256107.ch6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Steele, Guy L., Xiaowei Shen, Josep Torrellas, Mark Tuckerman, Eric J. Bohm, Laxmikant V. Kalé, Glenn Martyna, et al. "Cluster of Workstations." In Encyclopedia of Parallel Computing, 289. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Steele, Guy L., Xiaowei Shen, Josep Torrellas, Mark Tuckerman, Eric J. Bohm, Laxmikant V. Kalé, Glenn Martyna, et al. "Cluster File Systems." In Encyclopedia of Parallel Computing, 289. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chinchalkar, Shirish, Thomas F. Coleman, and Peter Mansfield. "Cluster Computing for Financial Engineering." In Applied Parallel Computing. State of the Art in Scientific Computing, 395–403. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11558958_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Minartz, Timo, Daniel Molka, Michael Knobloch, Stephan Krempel, Thomas Ludwig, Wolfgang E. Nagel, Bernd Mohr, and Hugo Falter. "eeClust: Energy-Efficient Cluster Computing." In Competence in High Performance Computing 2010, 111–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24025-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Afrati, Foto N., Vinayak Borkar, Michael Carey, Neoklis Polyzotis, and Jeffrey D. Ullman. "Cluster Computing, Recursion and Datalog." In Datalog Reloaded, 120–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24206-9_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chiesa, Alessandro, Eran Tromer, and Madars Virza. "Cluster Computing in Zero Knowledge." In Advances in Cryptology - EUROCRYPT 2015, 371–403. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46803-6_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "And Cluster Computing"

1

Ancona, M. "Cluster computing." In Proceedings Eleventh Euromicro Conference on Parallel, Distributed and Network-Based Processing. IEEE, 2003. http://dx.doi.org/10.1109/empdp.2003.1183567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Satoh. "Reusable mobile agents for cluster computing." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Crago, Steve, Kyle Dunn, Patrick Eads, Lorin Hochstein, Dong-In Kang, Mikyung Kang, Devendra Modium, Karandeep Singh, Jinwoo Suh, and John Paul Walters. "Heterogeneous Cloud Computing." In 2011 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2011. http://dx.doi.org/10.1109/cluster.2011.49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

"Proceedings. IEEE International Conference on Cluster Computing." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chen and Schmidt. "Computing large-scale alignments on a multi-cluster." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Abawajy and Dandamudi. "Parallel job scheduling on multicluster computing system." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kleban and Clearwater. "Interstitial computing: utilizing spare cycles on supercomputers." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dauger and Decyk. ""Plug-and-play" cluster computing using Mac OS X." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sperhac, Jeanette, Benjamin D. Plessinger, Jeffrey T. Palmer, Rudra Chakraborty, Gregary Dean, Martins Innus, Ryan Rathsam, et al. "Federating XDMoD to Monitor Affiliated Computing Resources." In 2018 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2018. http://dx.doi.org/10.1109/cluster.2018.00074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fujita, Norihisa, Ryohei Kobayashi, Yoshiki Yamaguchi, Kohji Yoshikawa, Makito Abe, and Masayuki Umemura. "Toward OpenACC-enabled GPU-FPGA Accelerated Computing." In 2020 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2020. http://dx.doi.org/10.1109/cluster49012.2020.00060.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "And Cluster Computing"

1

Alsing, Paul, Michael Fanto, and A. M. Smith. Cluster State Quantum Computing. Fort Belvoir, VA: Defense Technical Information Center, December 2012. http://dx.doi.org/10.21236/ada572237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Richards, Mark A., and Daniel P. Campbell. Rapidly Reconfigurable High Performance Computing Cluster. Fort Belvoir, VA: Defense Technical Information Center, July 2005. http://dx.doi.org/10.21236/ada438586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Duke, D. W., and T. P. Green. [Research toward a heterogeneous networked computing cluster]. Office of Scientific and Technical Information (OSTI), August 1998. http://dx.doi.org/10.2172/674884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Haoyuan, Ali Ghodsi, Matei Zaharia, Scott Shenker, and Ion Stoica. Reliable, Memory Speed Storage for Cluster Computing Frameworks. Fort Belvoir, VA: Defense Technical Information Center, June 2014. http://dx.doi.org/10.21236/ada611854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, H. Y., J. M. Brandt, and R. C. Armstrong. ATM-based cluster computing for multi-problem domains. Office of Scientific and Technical Information (OSTI), August 1996. http://dx.doi.org/10.2172/415338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Burger, Eric. Studying Building Energy Use with a Micro Computing Cluster. Experiment, October 2014. http://dx.doi.org/10.18258/3777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ilg, Mark. Multi-Core Computing Cluster for Safety Fan Analysis of Guided Projectiles. Fort Belvoir, VA: Defense Technical Information Center, September 2011. http://dx.doi.org/10.21236/ada551790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lele, Sanjiva K. Computing Cluster for Large Scale Turbulence Simulations and Applications in Computational Aeroacoustics. Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada406713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Abu-Gazaleh, Nael. Using Heterogeneous High Performance Computing Cluster for Supporting Fine-Grained Parallel Applications. Fort Belvoir, VA: Defense Technical Information Center, October 2006. http://dx.doi.org/10.21236/ada459900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gottlieb, Sigal. A Heterogeneous Terascale Computing Cluster for the Development of GPU Optimized High Order Numerical Methods. Fort Belvoir, VA: Defense Technical Information Center, November 2011. http://dx.doi.org/10.21236/ada566277.

Full text
APA, Harvard, Vancouver, ISO, and other styles