Academic literature on the topic "GPU-CPU"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles.

Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "GPU-CPU".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "GPU-CPU"

1

Zhu, Ziyu, Xiaochun Tang, and Quan Zhao. "A unified schedule policy of distributed machine learning framework for CPU-GPU cluster." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 3 (2021): 529–38. http://dx.doi.org/10.1051/jnwpu/20213930529.

Abstract: With the widespread use of GPU hardware, more and more distributed machine learning applications have begun to use CPU-GPU hybrid cluster resources to improve the efficiency of algorithms. However, existing distributed machine learning scheduling frameworks either consider task scheduling only on CPU resources or only on GPU resources. Even when the difference between CPU and GPU resources is taken into account, it is difficult to improve the resource usage of the entire system. In other words, the key challenge in using CPU-GPU clusters for distributed machine learning…
2

Cui, Pengjie, Haotian Liu, Bo Tang, and Ye Yuan. "CGgraph: An Ultra-Fast Graph Processing System on Modern Commodity CPU-GPU Co-processor." Proceedings of the VLDB Endowment 17, no. 6 (2024): 1405–17. http://dx.doi.org/10.14778/3648160.3648179.

Abstract: In recent years, many CPU-GPU heterogeneous graph processing systems have been developed in both academia and industry to facilitate large-scale graph processing in various applications, e.g., social networks and biological networks. However, the performance of existing systems can be significantly improved by addressing two prevailing challenges: GPU memory over-subscription and efficient CPU-GPU cooperative processing. In this work, we propose CGgraph, an ultra-fast CPU-GPU graph processing system to address these challenges. In particular, CGgraph overcomes GPU memory over-subscription by…
3

Lee, Taekhee, and Young J. Kim. "Massively parallel motion planning algorithms under uncertainty using POMDP." International Journal of Robotics Research 35, no. 8 (2015): 928–42. http://dx.doi.org/10.1177/0278364915594856.

Abstract: We present new parallel algorithms that solve continuous-state partially observable Markov decision process (POMDP) problems using the GPU (gPOMDP) and a hybrid of the GPU and CPU (hPOMDP). We choose the Monte Carlo value iteration (MCVI) method as our base algorithm and parallelize it using the multi-level parallel formulation of MCVI. For each parallel level, we propose efficient algorithms to utilize the massive data parallelism available on modern GPUs. Our GPU-based method uses two workload distribution techniques, compute/data interleaving and workload balancing…
4

Yogatama, Bobbi W., Weiwei Gong, and Xiangyao Yu. "Orchestrating data placement and query execution in heterogeneous CPU-GPU DBMS." Proceedings of the VLDB Endowment 15, no. 11 (2022): 2491–503. http://dx.doi.org/10.14778/3551793.3551809.

Abstract: There has been growing interest in using GPUs to accelerate data analytics due to their massive parallelism and high memory bandwidth. The main constraint of using GPUs for data analytics is the limited capacity of GPU memory. Heterogeneous CPU-GPU query execution is a compelling approach to mitigate the limited GPU memory capacity and PCIe bandwidth. However, the design space of heterogeneous CPU-GPU query execution has not been fully explored. We aim to improve state-of-the-art CPU-GPU data analytics engines by optimizing data placement and heterogeneous query execution…
5

Raju, K., and Niranjan N Chiplunkar. "Performance Enhancement of CUDA Applications by Overlapping Data Transfer and Kernel Execution." Applied Computer Science 17, no. 3 (2021): 5–18. http://dx.doi.org/10.35784/acs-2021-17.

Abstract: The CPU-GPU combination is a widely used heterogeneous computing system in which the CPU and GPU have different address spaces. Since the GPU cannot directly access CPU memory, the input data must be available in GPU memory before the GPU function is invoked. On completion of the GPU function, the results of the computation are transferred back to CPU memory. CPU-GPU data transfer happens over the PCI-Express bus, whose bandwidth is much lower than that of GPU memory, so the speed at which data can be transferred is limited by PCIe. Hence, PCIe acts as a performance bottleneck…
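This PCIe bottleneck is commonly reduced by splitting the input into chunks and overlapping host-device transfers with kernel execution on separate CUDA streams, the general technique the paper's title refers to. A minimal sketch of that pattern follows; it is not the authors' code, and the `scale` kernel and chunk count are illustrative:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *d, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;                      // trivial stand-in for real GPU work
}

int main() {
    const int N = 1 << 24, CHUNKS = 4, CHUNK = N / CHUNKS;
    float *h, *d;
    cudaMallocHost((void **)&h, N * sizeof(float));  // pinned memory enables async copies
    cudaMalloc((void **)&d, N * sizeof(float));
    for (int i = 0; i < N; ++i) h[i] = 1.0f;

    cudaStream_t s[CHUNKS];
    for (int c = 0; c < CHUNKS; ++c) cudaStreamCreate(&s[c]);

    // Each chunk's H2D copy, kernel, and D2H copy go to its own stream, so PCIe
    // transfers for one chunk overlap with kernel execution on another chunk.
    for (int c = 0; c < CHUNKS; ++c) {
        size_t off = (size_t)c * CHUNK;
        cudaMemcpyAsync(d + off, h + off, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, s[c]);
        scale<<<(CHUNK + 255) / 256, 256, 0, s[c]>>>(d + off, CHUNK, 2.0f);
        cudaMemcpyAsync(h + off, d + off, CHUNK * sizeof(float),
                        cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();
    printf("h[0] = %f\n", h[0]);               // expect 2.0

    for (int c = 0; c < CHUNKS; ++c) cudaStreamDestroy(s[c]);
    cudaFreeHost(h);
    cudaFree(d);
    return 0;
}
```

Pinned host memory (`cudaMallocHost`) is what allows `cudaMemcpyAsync` to genuinely overlap with kernels running on other streams.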
6

Power, Jason, Joel Hestness, Marc S. Orr, Mark D. Hill, and David A. Wood. "gem5-gpu: A Heterogeneous CPU-GPU Simulator." IEEE Computer Architecture Letters 14, no. 1 (2015): 34–36. http://dx.doi.org/10.1109/lca.2014.2299539.

7

Abdusalomov, Saidmalikxon Mannop o`g`li. "CPU VA GPU FARQLARI." Central Asian Journal of Education and Innovation 2, no. 5 (2023): 168–70. https://doi.org/10.5281/zenodo.7935842.

8

Liu, Gaogao, Wenbo Yang, Peng Li, et al. "MIMO Radar Parallel Simulation System Based on CPU/GPU Architecture." Sensors 22, no. 1 (2022): 396. http://dx.doi.org/10.3390/s22010396.

Abstract: The data volume and computational load of MIMO radar are huge; very high-speed computation is necessary for real-time processing. In this paper, we mainly study the time-division MIMO radar signal processing flow, propose an improved MIMO radar signal processing algorithm that raises the processing speed relative to previous algorithms, and, on this basis, propose a parallel simulation system for MIMO radar based on a CPU/GPU architecture. The outer layer of the framework is coarse-grained and accelerated with OpenMP on the CPU, and the inner layer is fine-grained…
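The coarse-grained/fine-grained split described above, OpenMP threads on the CPU each driving fine-grained GPU work, can be sketched as follows. This is a generic illustration of the pattern under assumed names (`process_channel`, `simulate`), not the authors' simulation code:

```cuda
#include <cuda_runtime.h>
#include <omp.h>

__global__ void process_channel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];    // placeholder for per-channel signal processing
}

// Outer layer: one OpenMP thread per radar channel (coarse-grained, on the CPU).
// Inner layer: each thread pushes fine-grained work to the GPU in its own stream.
void simulate(float **channels, int num_channels, int samples) {
    #pragma omp parallel for
    for (int c = 0; c < num_channels; ++c) {
        cudaStream_t s;
        cudaStreamCreate(&s);
        float *d;
        cudaMalloc((void **)&d, samples * sizeof(float));
        cudaMemcpyAsync(d, channels[c], samples * sizeof(float),
                        cudaMemcpyHostToDevice, s);
        process_channel<<<(samples + 255) / 256, 256, 0, s>>>(d, samples);
        cudaMemcpyAsync(channels[c], d, samples * sizeof(float),
                        cudaMemcpyDeviceToHost, s);
        cudaStreamSynchronize(s);
        cudaFree(d);
        cudaStreamDestroy(s);
    }
}
```

Compiling such a mix typically requires passing the OpenMP flag through to the host compiler, e.g. `nvcc -Xcompiler -fopenmp`.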
9

Zou, Yong Ning, Jue Wang, and Jian Wei Li. "Cutting Display of Industrial CT Volume Data Based on GPU." Advanced Materials Research 271-273 (July 2011): 1096–102. http://dx.doi.org/10.4028/www.scientific.net/amr.271-273.1096.

Abstract: The rapid development of Graphics Processing Units (GPUs) in recent years in terms of performance and programmability has attracted the attention of those seeking to leverage alternative architectures for better performance than commodity CPUs can provide. This paper presents a new algorithm for cutting display of computed tomography volume data on the GPU. We first introduce the programming model of the GPU and outline the implementation of techniques for oblique-plane cutting display of volume data on both the CPU and GPU. We compare the approaches and present performance results for…
10

Jiang, Ronglin, Shugang Jiang, Yu Zhang, Ying Xu, Lei Xu, and Dandan Zhang. "GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform." International Journal of Antennas and Propagation 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/321081.

Abstract: This paper introduces a finite-difference time-domain (FDTD) code written in Fortran and CUDA for realistic electromagnetic calculations, with parallelization via the Message Passing Interface (MPI) and Open Multiprocessing (OpenMP). Since both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) resources are utilized, faster execution can be reached compared to a traditional pure-GPU code. In our experiments, 64 NVIDIA TESLA K20m GPUs and 64 INTEL XEON E5-2670 CPUs are used to carry out the pure-CPU, pure-GPU, and CPU + GPU tests. Relative to the pure-CPU calculations…
More sources

Theses on the topic "GPU-CPU"

1

Fang, Zhuowen. "Java GPU vs CPU Hashing Performance." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33994.

Abstract: In recent years, public interest in blockchain technology has been growing since it was introduced in 2008, primarily because of its ability to create an immutable ledger for storing information that never will or can be changed. As an expanding chain structure, the act of nodes adding blocks to the chain is called mining, which is regulated by a consensus mechanism. In the most widely used consensus mechanism, proof of work, this process is based on computationally heavy guessing of hashes of blocks. Today, several prominent ways of performing this guessing have been developed, thanks…
2

Dollinger, Jean-François. "A framework for efficient execution on GPU and CPU+GPU systems." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAD019/document.

Abstract: The technological barriers encountered by semiconductor manufacturers in the early 2000s ended the rapid growth in the performance of sequential processing units. The current trend is to multiply the number of processor cores per socket and to make progressive use of GPU cards for highly parallel computations. The complexity of recent architectures makes static estimation of a program's performance difficult. We describe a reliable and accurate method for predicting the execution time of parallel loop nests on the GPU, based on three…
3

Gjermundsen, Aleksander. "CPU and GPU Co-processing for Sound." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11794.

Abstract: When using voice communications, one of the problematic phenomena that can occur is participants hearing an echo of their own voice. Acoustic echo cancellation (AEC) is used to remove this echo but can be computationally demanding. The recent OpenCL standard allows high-level programs to be run on multi-core CPUs as well as Graphics Processing Units (GPUs) and custom accelerators. This opens up new possibilities for offloading computations, which is especially important for real-time applications. Although many algorithms for image and video processing have been studied on the GPU, audio…
4

Carlos, Eduardo Telles. "Hybrid Frustum Culling Using CPU and GPU." Pontifícia Universidade Católica do Rio de Janeiro, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=31453@1.

Abstract: One of the oldest problems in computer graphics is visibility determination. Several algorithms have been developed to make ever larger and more detailed models feasible. Among these algorithms, frustum culling stands out; its role is to remove objects that are not visible to the observer. This algorithm, very common in many applications, has been improved over the years in order to further accelerate its execution. Although it is treated as a well-solved problem in computer graphics, some points still…
5

Farooqui, Naila. "Runtime specialization for heterogeneous CPU-GPU platforms." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54915.

Abstract: Heterogeneous parallel architectures like those comprised of CPUs and GPUs are a tantalizing compute fabric for performance-hungry developers. While these platforms enable order-of-magnitude performance increases for many data-parallel application domains, several open challenges remain: (i) the distinct execution models inherent in the heterogeneous devices present on such platforms drive the need to dynamically match workload characteristics to the underlying resources, and (ii) the complex architecture and programming models of such systems require substantial application knowledge and…
6

Smith, Michael Shawn. "Performance Analysis of Hybrid CPU/GPU Environments." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/300.

Abstract: We present two metrics to assist the performance analyst in gaining a unified view of application performance in a hybrid environment: GPU Computation Percentage and GPU Load Balance. We analyze the metrics using a matrix multiplication benchmark suite and a real scientific application. We also extend an experiment management system to support GPU performance data and to calculate and store our GPU Computation Percentage and GPU Load Balance metrics.
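The thesis defines these metrics precisely; purely as a rough illustration, one plausible way to measure a GPU computation percentage is to compare CUDA event timings against total wall-clock time. Both the `work` kernel and the assumption that the metric is simply GPU time over total time are hypothetical:

```cuda
#include <cuda_runtime.h>
#include <chrono>
#include <cstdio>

__global__ void work(float *d, int n) {        // hypothetical GPU portion of the application
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = sqrtf((float)i);
}

int main() {
    const int N = 1 << 22;
    float *d;
    cudaMalloc((void **)&d, N * sizeof(float));

    cudaEvent_t beg, end;
    cudaEventCreate(&beg);
    cudaEventCreate(&end);

    auto t0 = std::chrono::steady_clock::now();    // start of total wall-clock time
    cudaEventRecord(beg);
    work<<<(N + 255) / 256, 256>>>(d, N);
    cudaEventRecord(end);
    cudaEventSynchronize(end);
    // ... CPU-side work for the same phase would run here ...
    auto t1 = std::chrono::steady_clock::now();

    float gpu_ms = 0.0f;
    cudaEventElapsedTime(&gpu_ms, beg, end);       // time spent in GPU computation
    double total_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    printf("GPU computation percentage: %.1f%%\n", 100.0 * gpu_ms / total_ms);

    cudaEventDestroy(beg);
    cudaEventDestroy(end);
    cudaFree(d);
    return 0;
}
```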
7

Wong, Henry Ting-Hei. "Architectures and limits of GPU-CPU heterogeneous systems." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2529.

Abstract: As we continue to be able to put an increasing number of transistors on a single chip, the answer to the perpetual question of what the best processor we could build with those transistors is remains uncertain. Past work has shown that heterogeneous multiprocessor systems provide benefits in performance and efficiency. This thesis explores heterogeneous systems composed of a traditional sequential processor (CPU) and highly parallel graphics processors (GPU). It presents a tightly coupled heterogeneous chip multiprocessor architecture for general-purpose non-graphics computation and…
8

Gummadi, Deepthi. "Improving GPU performance by regrouping CPU-memory data." Thesis, Wichita State University, 2014. http://hdl.handle.net/10057/10959.

Abstract: High-performance computing is essential for fast and effective analysis of large, complex systems. The NVIDIA Compute Unified Device Architecture (CUDA)-assisted central processing unit (CPU) / graphics processing unit (GPU) computing platform has proven its potential for high-performance computing. In CPU/GPU computing, original data and instructions are copied from CPU main memory to GPU global memory. Inside the GPU, it is beneficial to keep data in shared memory (shared only by the threads of one block) rather than in global memory (shared by all threads). However, shared memory…
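A generic CUDA sketch of why staging data in `__shared__` memory pays off when threads of a block reuse each other's elements (a 1D stencil here, not the thesis's regrouping scheme):

```cuda
#include <cuda_runtime.h>

// 1D 3-point stencil: each input element is read by up to three threads of a block,
// so staging the block's tile (plus a halo cell on each side) in shared memory
// replaces repeated global-memory reads with on-chip accesses.
__global__ void stencil_shared(const float *in, float *out, int n) {
    extern __shared__ float tile[];            // blockDim.x + 2 floats
    int g = blockIdx.x * blockDim.x + threadIdx.x;
    int l = threadIdx.x + 1;

    tile[l] = (g < n) ? in[g] : 0.0f;
    if (threadIdx.x == 0)              tile[0]     = (g > 0) ? in[g - 1] : 0.0f;
    if (threadIdx.x == blockDim.x - 1) tile[l + 1] = (g + 1 < n) ? in[g + 1] : 0.0f;
    __syncthreads();

    if (g < n) out[g] = (tile[l - 1] + tile[l] + tile[l + 1]) / 3.0f;
}

int main() {
    const int N = 1 << 20, T = 256;
    float *in, *out;
    cudaMalloc((void **)&in, N * sizeof(float));
    cudaMalloc((void **)&out, N * sizeof(float));
    cudaMemset(in, 0, N * sizeof(float));
    stencil_shared<<<(N + T - 1) / T, T, (T + 2) * sizeof(float)>>>(in, out, N);
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```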
9

Chen, Wei. "Dynamic Workload Division in GPU-CPU Heterogeneous Systems." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1364250106.

10

Ben, Romdhanne Bilel. "Simulation des réseaux à grande échelle sur les architectures de calculs hétérogènes." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0088/document.

Abstract: Simulation is an essential step in the evolution of networked systems. The scalability and efficiency of simulation tools are key to the objectivity of the results obtained, given the growing complexity of new wireless networks. Discrete-event simulation is well suited to scaling up, but existing software architectures do not take advantage of recent advances in hardware such as parallel processors and graphics coprocessors. In this context, the objective of this thesis is to propose mechanisms…
More sources

Books on the topic "GPU-CPU"

1

Piccoli, María Fabiana. Computación de alto desempeño en GPU. Editorial de la Universidad Nacional de La Plata (EDULP), 2011. http://dx.doi.org/10.35537/10915/18404.

Abstract: This book is the result of research on the characteristics of the GPU and its adoption as a massively parallel architecture for general-purpose applications. Its purpose is to become a useful tool for guiding the first steps of those starting out in high-performance computing on GPUs. It aims to summarize the state of the art based on the proposed bibliography. The objective is not only to describe the many-core architecture of the GPU and the CUDA programming tool, but also to lead the reader toward developing programs…

Book chapters on the topic "GPU-CPU"

1

Ou, Zhixin, Juan Chen, Yuyang Sun, et al. "AOA: Adaptive Overclocking Algorithm on CPU-GPU Heterogeneous Platforms." In Algorithms and Architectures for Parallel Processing. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22677-9_14.

Abstract: Although GPUs have been used to accelerate various convolutional neural network algorithms with good performance, the demand for performance improvement is still continuously increasing. CPU/GPU overclocking technology brings opportunities for further performance improvement on CPU-GPU heterogeneous platforms. However, CPU/GPU overclocking inevitably increases the power of the CPU/GPU, which is not conducive to energy conservation, energy-efficiency optimization, or even system stability. How to effectively constrain the total energy to remain roughly unchanged during CPU/GPU overclocking…
2

Stuart, Jeff A., Michael Cox, and John D. Owens. "GPU-to-CPU Callbacks." In Euro-Par 2010 Parallel Processing Workshops. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21878-1_45.

3

Wille, Mario, Tobias Weinzierl, Gonzalo Brito Gadeschi, and Michael Bader. "Efficient GPU Offloading with OpenMP for a Hyperbolic Finite Volume Solver on Dynamically Adaptive Meshes." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-32041-5_4.

Abstract: We identify and show how to overcome an OpenMP bottleneck in the administration of GPU memory. It arises for a wave equation solver on dynamically adaptive block-structured Cartesian meshes, which keeps all CPU threads busy and allows all of them to offload sets of patches to the GPU. Our studies show that multithreaded, concurrent, non-deterministic access to the GPU leads to performance breakdowns, since the GPU memory bookkeeping as offered through OpenMP's clause, i.e., the allocation and freeing, becomes another runtime challenge besides expensive data transfer and the actual computation…
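The chapter's fix targets OpenMP's GPU memory administration specifically; the general mitigation idea, allocating device buffers once and reusing them instead of allocating and freeing on every offload, can be sketched with the CUDA runtime as follows (the class and its sizing are hypothetical):

```cuda
#include <cuda_runtime.h>
#include <vector>
#include <mutex>

// A tiny device-buffer pool: offloads reuse pre-allocated GPU buffers instead of
// calling cudaMalloc/cudaFree for every batch of patches, so concurrent CPU threads
// do not serialize on GPU memory allocation.
class DevicePool {
public:
    DevicePool(size_t bytes_per_buffer, int count) {
        for (int i = 0; i < count; ++i) {
            void *p = nullptr;
            cudaMalloc(&p, bytes_per_buffer);  // one-time allocation up front
            free_.push_back(p);
        }
    }
    ~DevicePool() {
        for (void *p : free_) cudaFree(p);     // assumes all buffers were returned
    }
    void *acquire() {                          // called concurrently by CPU threads
        std::lock_guard<std::mutex> lock(m_);
        if (free_.empty()) return nullptr;     // caller waits or falls back to the CPU
        void *p = free_.back();
        free_.pop_back();
        return p;
    }
    void release(void *p) {
        std::lock_guard<std::mutex> lock(m_);
        free_.push_back(p);
    }
private:
    std::vector<void *> free_;
    std::mutex m_;
};
```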
4

Reinders, James, Ben Ashbaugh, James Brodman, Michael Kinsner, John Pennycook, and Xinmin Tian. "Programming for GPUs." In Data Parallel C++. Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5574-2_15.

Abstract: Over the last few decades, Graphics Processing Units (GPUs) have evolved from specialized hardware devices capable of drawing images on a screen to general-purpose devices capable of executing complex parallel kernels. Nowadays, nearly every computer includes a GPU alongside a traditional CPU, and many programs may be accelerated by offloading part of a parallel algorithm from the CPU to the GPU.
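The chapter itself teaches this with Data Parallel C++ (SYCL); purely as a generic illustration of offloading a data-parallel loop from the CPU to the GPU, the same idea in plain CUDA looks like this (names are illustrative):

```cuda
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];         // the parallel loop body, run on the GPU
}

int main() {
    const int N = 1 << 20;
    std::vector<float> x(N, 1.0f), y(N, 2.0f);

    float *dx, *dy;
    cudaMalloc((void **)&dx, N * sizeof(float));
    cudaMalloc((void **)&dy, N * sizeof(float));
    cudaMemcpy(dx, x.data(), N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y.data(), N * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(N + 255) / 256, 256>>>(N, 3.0f, dx, dy);   // offload the parallel work

    cudaMemcpy(y.data(), dy, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);               // expect 5.0

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```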
5

Shi, Lin, Hao Chen, and Ting Li. "Hybrid CPU/GPU Checkpoint for GPU-Based Heterogeneous Systems." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-53962-6_42.

6

Li, Jie, George Michelogiannakis, Brandon Cook, Dulanya Cooray, and Yong Chen. "Analyzing Resource Utilization in an HPC System: A Case Study of NERSC’s Perlmutter." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-32041-5_16.

Abstract: Resource demands of HPC applications vary significantly. However, it is common for HPC systems to assign resources primarily on a per-node basis to prevent interference from co-located workloads. This gap between coarse-grained resource allocation and varying resource demands can lead to HPC resources not being fully utilized. In this study, we analyze the resource usage and application behavior of NERSC's Perlmutter, a state-of-the-art open-science HPC system with both CPU-only and GPU-accelerated nodes. Our one-month usage analysis reveals that CPUs are commonly not fully utilized…
7

Krol, Dawid, Jason Harris, and Dawid Zydek. "Hybrid GPU/CPU Approach to Multiphysics Simulation." In Progress in Systems Engineering. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-08422-0_130.

8

Sao, Piyush, Richard Vuduc, and Xiaoye Sherry Li. "A Distributed CPU-GPU Sparse Direct Solver." In Lecture Notes in Computer Science. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09873-9_41.

9

Chen, Lin, Deshi Ye, and Guochuan Zhang. "Online Scheduling on a CPU-GPU Cluster." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38236-9_1.

10

Pitliya, Deepika, and Namita Palecha. "Shared Buffer Crossbar Architecture for GPU-CPU." In Lecture Notes in Electrical Engineering. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0275-7_60.


Conference proceedings on the topic "GPU-CPU"

1

Elis, Bengisu, Olga Pearce, David Boehme, Jason Burmark, and Martin Schulz. "Non-Blocking GPU-CPU Notifications to Enable More GPU-CPU Parallelism." In HPCAsia 2024: International Conference on High Performance Computing in Asia-Pacific Region. ACM, 2024. http://dx.doi.org/10.1145/3635035.3635036.

2

Yang, Yi, Ping Xiang, Mike Mantor, and Huiyang Zhou. "CPU-assisted GPGPU on fused CPU-GPU architectures." In 2012 IEEE 18th International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2012. http://dx.doi.org/10.1109/hpca.2012.6168948.

3

Rai, Siddharth, and Mainak Chaudhuri. "Improving CPU Performance Through Dynamic GPU Access Throttling in CPU-GPU Heterogeneous Processors." In 2017 IEEE International Parallel and Distributed Processing Symposium: Workshops (IPDPSW). IEEE, 2017. http://dx.doi.org/10.1109/ipdpsw.2017.37.

4

Chadwick, Jools, Francois Taiani, and Jonathan Beecham. "From CPU to GP-GPU." In the 10th International Workshop. ACM Press, 2012. http://dx.doi.org/10.1145/2405136.2405142.

5

Wang, Xin, and Wei Zhang. "A Sample-Based Dynamic CPU and GPU LLC Bypassing Method for Heterogeneous CPU-GPU Architectures." In 2017 IEEE Trustcom/BigDataSE/ICESS. IEEE, 2017. http://dx.doi.org/10.1109/trustcom/bigdatase/icess.2017.309.

6

K., Raju, Niranjan N. Chiplunkar, and Kavoor Rajanikanth. "A CPU-GPU Cooperative Sorting Approach." In 2019 Innovations in Power and Advanced Computing Technologies (i-PACT). IEEE, 2019. http://dx.doi.org/10.1109/i-pact44901.2019.8960106.

7

Xu, Yan, Gary Tan, Xiaosong Li, and Xiao Song. "Mesoscopic traffic simulation on CPU/GPU." In the 2nd ACM SIGSIM/PADS conference. ACM Press, 2014. http://dx.doi.org/10.1145/2601381.2601396.

8

Kerr, Andrew, Gregory Diamos, and Sudhakar Yalamanchili. "Modeling GPU-CPU workloads and systems." In the 3rd Workshop. ACM Press, 2010. http://dx.doi.org/10.1145/1735688.1735696.

9

Kang, SeungGu, Hong Jun Choi, Cheol Hong Kim, Sung Woo Chung, DongSeop Kwon, and Joong Chae Na. "Exploration of CPU/GPU co-execution." In the 2011 ACM Symposium. ACM Press, 2011. http://dx.doi.org/10.1145/2103380.2103388.

10

Aciu, Razvan-Mihai, and Horia Ciocarlie. "Algorithm for Cooperative CPU-GPU Computing." In 2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC). IEEE, 2013. http://dx.doi.org/10.1109/synasc.2013.53.


Reports on the topic "GPU-CPU"

1

Samfass, Philipp. Porting AMG2013 to Heterogeneous CPU+GPU Nodes. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1343001.

2

Smith, Michael. Performance Analysis of Hybrid CPU/GPU Environments. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.300.

3

Rudin, Sven. VASP calculations on Chicoma: CPU vs. GPU. Office of Scientific and Technical Information (OSTI), 2023. http://dx.doi.org/10.2172/1962769.

4

Owens, John. A Programming Framework for Scientific Applications on CPU-GPU Systems. Office of Scientific and Technical Information (OSTI), 2013. http://dx.doi.org/10.2172/1069280.

5

Pietarila Graham, Anna, Daniel Holladay, Jonah Miller, and Jeffrey Peterson. Spiner-EOSPAC Comparison: performance and accuracy on Power9 CPU and GPU. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1859858.

6

Kurzak, Jakub, Piotr Luszczek, Mathieu Faverge, and Jack Dongarra. LU Factorization with Partial Pivoting for a Multi-CPU, Multi-GPU Shared Memory System. Office of Scientific and Technical Information (OSTI), 2012. http://dx.doi.org/10.2172/1173291.

7

Petersen, Mark, and Kieran Ringel. Performance Results on CPU/GPU Exascale Architectures for OMEGA: The Ocean Model for E3SM Global Applications. Office of Scientific and Technical Information (OSTI), 2024. http://dx.doi.org/10.2172/2448297.

8

Snider, Dale M. DOE SBIR Phase-1 Report on Hybrid CPU-GPU Parallel Development of the Eulerian-Lagrangian Barracuda Multiphase Program. Office of Scientific and Technical Information (OSTI), 2011. http://dx.doi.org/10.2172/1009440.

9

Ananthan, Shreyas, Alan Williams, James Overfelt, et al. Demonstration and performance testing of extreme-resolution simulations with static meshes on Summit (CPU & GPU) for a parked-turbine configuration and an actuator-line (mid-fidelity model) wind farm configuration. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1706223.
