Academic literature on the topic 'High performance Computation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'High performance Computation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "High performance Computation"

1

Orden, Juan C. Garcia, and Ignacio Romero Olleros. "63999 Thermodynamically Consistent Dynamic Formulation of Discrete Thermoviscoelastic Elements (High Performance Formalisms and Computation)." Proceedings of the Asian Conference on Multibody Dynamics 2010.5 (2010): 63999-1–63999-9. http://dx.doi.org/10.1299/jsmeacmd.2010.5._63999-1_.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hall, Michael J., Neil E. Olson, and Roger D. Chamberlain. "Utilizing Virtualized Hardware Logic Computations to Benefit Multi-User Performance." Electronics 10, no. 6 (March 12, 2021): 665. http://dx.doi.org/10.3390/electronics10060665.

Full text
Abstract:
Recent trends in computer architecture have increased the role of dedicated hardware logic as an effective approach to computation. Virtualization of logic computations (i.e., by sharing a fixed function) provides a means to effectively utilize hardware resources by context switching the logic to support multiple data streams of computation. Multiple applications or users can take advantage of this by using the virtualized computation in an accelerator as a computational service, such as in a software as a service (SaaS) model over a network. In this paper, we analyze the performance of virtualized hardware logic and develop M/G/1 queueing model equations and simulation models to predict system performance. We predict system performance using the queueing model and tune a schedule for optimal performance. We observe that high variance and high load give high mean latency. The simulation models validate the queueing model, predict queue occupancy, show that a Poisson input process distribution (assumed in the queueing model) is reasonable for low load, and expand the set of scheduling algorithms considered.
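The abstract above refers to M/G/1 queueing equations for predicting mean latency. As a hedged illustration of the kind of relationship involved (this is the textbook Pollaczek-Khinchine formula, not the paper's own derived equations, and the parameter names are illustrative), a minimal Python sketch:

```python
def mg1_mean_latency(arrival_rate, mean_service, service_var):
    """Mean time in system for an M/G/1 queue (Pollaczek-Khinchine).

    W = lambda * E[S^2] / (2 * (1 - rho)) + E[S], with rho = lambda * E[S] < 1.
    Illustrative sketch only; the paper derives its own model equations.
    """
    rho = arrival_rate * mean_service                  # server utilization
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization must be below 1")
    second_moment = service_var + mean_service ** 2    # E[S^2]
    waiting = arrival_rate * second_moment / (2.0 * (1.0 - rho))
    return waiting + mean_service

# Higher service-time variance raises mean latency at the same load:
print(mg1_mean_latency(0.8, 1.0, 0.1))   # low variance
print(mg1_mean_latency(0.8, 1.0, 4.0))   # high variance
```

The example reproduces the paper's qualitative observation that high load and high variance both drive up mean latency.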
APA, Harvard, Vancouver, ISO, and other styles
3

Nanjo, Takao, Naoki Sugano, and Etsujiro Imanishi. "56492 Fast Simulation of Flexible Multibody Dynamics Using Improved Domain Decomposition Technique (High Performance Formalisms and Computation)." Proceedings of the Asian Conference on Multibody Dynamics 2010.5 (2010): 56492-1–56492-8. http://dx.doi.org/10.1299/jsmeacmd.2010.5._56492-1_.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shiiba, Taichi, and Naoya Machida. "58291 Efficiency Evaluation of the Real-time Multibody Analysis with Matrix Libraries (High Performance Formalisms and Computation)." Proceedings of the Asian Conference on Multibody Dynamics 2010.5 (2010): 58291-1–58291-7. http://dx.doi.org/10.1299/jsmeacmd.2010.5._58291-1_.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Heyn, Toby, Dan Negrut, and Alessandro Tasora. "59056 Tracked Vehicle Simulation on Granular Terrain Leveraging Parallel Computing on GPUs (High Performance Formalisms and Computation)." Proceedings of the Asian Conference on Multibody Dynamics 2010.5 (2010): 59056-1–59056-10. http://dx.doi.org/10.1299/jsmeacmd.2010.5._59056-1_.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Goswami, Sukalyan, and Kuntal Mukherjee. "High Performance Fault Tolerant Resource Scheduling in Computational Grid Environment." International Journal of Web-Based Learning and Teaching Technologies 15, no. 1 (January 2020): 73–87. http://dx.doi.org/10.4018/ijwltt.2020010104.

Full text
Abstract:
Virtual resources team up to create a computational grid, which is used in computation-intensive problem solving. A majority of these problems require high-performance resources to compute and generate results, making grid computation another type of high performance computing. Optimization in computational grids relates to resource utilization, which in turn is achieved by the proper distribution of loads among participating resources. This research takes an adaptive resource-ranking approach and improves the effectiveness of the NDFS algorithm by scheduling jobs on those ranked resources, thereby increasing the number of job deadlines and service-quality agreements met. Moreover, resource failure is handled by introducing a partial backup approach. The benchmark codes of Fast Fourier Transform and Matrix Multiplication are executed in a real test bed of a computational grid, set up with Globus Toolkit 5.2, to justify the propositions made in this article.
APA, Harvard, Vancouver, ISO, and other styles
7

Pietras, M., and P. Klęsk. "FPGA implementation of logarithmic versions of Baum-Welch and Viterbi algorithms for reduced precision hidden Markov models." Bulletin of the Polish Academy of Sciences Technical Sciences 65, no. 6 (December 1, 2017): 935–47. http://dx.doi.org/10.1515/bpasts-2017-0101.

Full text
Abstract:
This paper presents a programmable system-on-chip implementation to be used for acceleration of computations within hidden Markov models. The high-level synthesis (HLS) and “divide-and-conquer” approaches are presented for parallelization of the Baum-Welch and Viterbi algorithms. To avoid arithmetic underflows, all computations are performed within the logarithmic space. Additionally, in order to carry out computations efficiently – i.e. directly in an FPGA system or a processor cache – we postulate to reduce the floating-point representations of HMMs. We state and prove a lemma about the length of numerically unsafe sequences for such reduced-precision models. Finally, special attention is devoted to the design of a multiple logarithm and exponent approximation unit (MLEAU). Using associative mapping, this unit allows for simultaneous conversions of multiple values and thereby compensates for the computational effort of logarithmic-space operations. The design evaluation reveals the absolute stall delay incurred by multiple hardware conversions to logarithms and to exponents, and the experimental evaluation reveals HMM computation boundaries related to their probabilities and floating-point representation. The performance differences at each stage of computation are summarized in a comparison between hardware acceleration using the MLEAU and a typical software implementation on an ARM or Intel processor.
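Because the abstract stresses working entirely in logarithmic space to avoid underflow, a minimal software sketch of that standard technique may help (this is the generic log-sum-exp trick used in forward/Baum-Welch recursions, not the paper's FPGA design; function names are illustrative):

```python
import math

def log_sum_exp(log_values):
    """log(sum(exp(v))) computed without floating-point underflow."""
    m = max(log_values)
    if m == float("-inf"):        # all probabilities are zero
        return float("-inf")
    return m + math.log(sum(math.exp(v - m) for v in log_values))

def forward_step(prev_log_alpha, log_trans, log_emit):
    """One forward-algorithm step of an HMM, entirely in log space."""
    n = len(prev_log_alpha)
    return [
        log_emit[j] + log_sum_exp([prev_log_alpha[i] + log_trans[i][j]
                                   for i in range(n)])
        for j in range(n)
    ]
```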
APA, Harvard, Vancouver, ISO, and other styles
8

Monfared, Alireza K., Ellen W. Zegura, Mostafa Ammar, David Doria, and David Bruno. "Computational ferrying: Efficient scheduling of computation on a mobile high performance computer." Computer Communications 96 (December 2016): 110–22. http://dx.doi.org/10.1016/j.comcom.2016.09.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kozinsky, Boris, and David J. Singh. "Thermoelectrics by Computational Design: Progress and Opportunities." Annual Review of Materials Research 51, no. 1 (July 26, 2021): 565–90. http://dx.doi.org/10.1146/annurev-matsci-100520-015716.

Full text
Abstract:
The performance of thermoelectric materials is determined by their electrical and thermal transport properties that are very sensitive to small modifications of composition and microstructure. Discovery and design of next-generation materials are starting to be accelerated by computational guidance. We review progress and challenges in the development of accurate and efficient first-principles methods for computing transport coefficients and illustrate approaches for both rapid materials screening and focused optimization. Particularly important and challenging are computations of electron and phonon scattering rates that enter the Boltzmann transport equations, and this is where there are many opportunities for improving computational methods. We highlight the first successful examples of computation-driven discoveries of high-performance materials and discuss avenues for tightening the interaction between theoretical and experimental materials discovery and optimization.
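As textbook background for why these transport coefficients matter (not a result of the review itself), they combine into the standard dimensionless thermoelectric figure of merit:

```latex
zT = \frac{S^{2}\,\sigma\,T}{\kappa_{e} + \kappa_{L}}
```

where S is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature, and κ_e and κ_L the electronic and lattice contributions to the thermal conductivity.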
APA, Harvard, Vancouver, ISO, and other styles
10

Ferlin, Edson Pedro, Heitor Silvério Lopes, Carlos R. Erig Lima, and Maurício Perretto. "A FPGA-Based Reconfigurable Parallel Architecture for High-Performance Numerical Computation." Journal of Circuits, Systems and Computers 20, no. 05 (August 2011): 849–65. http://dx.doi.org/10.1142/s0218126611007645.

Full text
Abstract:
Many real-world engineering problems require high computational power, especially regarding the processing time. Current parallel processing techniques play an important role in reducing the processing time. Recently, reconfigurable computation has gained large attention thanks to its ability to combine hardware performance and software flexibility. Also, the availability of high-density Field Programmable Gate Array devices and corresponding development systems allowed the popularization of reconfigurable computation, encouraging the development of very complex, compact, and powerful systems for custom applications. This work presents an architecture for parallel reconfigurable computation based on the dataflow concept. This architecture allows reconfigurability of the system for many problems and, particularly, for numerical computation. Several experiments were done analyzing the scalability of the architecture, as well as comparing its performance with other approaches. Overall results are relevant and promising. The developed architecture has performance and scalability suited for engineering problems that demand intensive numerical computation.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "High performance Computation"

1

Reis, Ruy Freitas. "Simulações numéricas 3D em ambiente paralelo de hipertermia com nanopartículas magnéticas [3D numerical simulations of hyperthermia with magnetic nanoparticles in a parallel environment]." Universidade Federal de Juiz de Fora (UFJF), 2014. https://repositorio.ufjf.br/jspui/handle/ufjf/3499.

Full text
Abstract:
This work deals with the numerical modeling of solid tumor treatment with hyperthermia using magnetic nanoparticles, considering the 3D bioheat transfer model proposed by Pennes (1948). Two different treatments of blood perfusion were compared: the first assumes a constant value, and the second a temperature-dependent function. The living tissue was modeled with skin, fat and muscle layers, in addition to the tumor. The model solution was approximated with the finite difference method (FDM) in a heterogeneous medium. Due to the different blood perfusion parameters, a system of linear equations (constant perfusion) and a system of nonlinear equations (temperature-dependent perfusion) were obtained. To discretize the time domain, two explicit numerical strategies were used: the first the classical Euler method, and the second a predictor-corrector algorithm derived from the generalized trapezoidal alpha-family of time integration methods. Since the computational time required to solve a three-dimensional model is large, two parallel strategies were applied to the numerical method: the first uses the OpenMP parallel programming API, and the second the CUDA platform. The experimental results showed that the OpenMP parallelization runs up to 39 times faster than the sequential version, and the CUDA version was also efficient, yielding speed-ups of up to 242 times over the sequential execution time. As a result, the simulation runs about twice as fast as the biological phenomenon itself.
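The abstract describes an explicit finite-difference treatment of the Pennes bioheat equation. As a hedged, serial illustration of one explicit Euler step on a uniform 3D grid (the thesis's heterogeneous tissue layers, boundary conditions, and OpenMP/CUDA parallelization are not reproduced; parameter names are illustrative), a Python sketch:

```python
import numpy as np

def pennes_step(T, dt, dx, rho_c, k, w_b, rho_b_c_b, T_a, Q):
    """One explicit Euler step of the Pennes bioheat equation on a 3D grid.

    rho_c * dT/dt = k * laplacian(T) + w_b * rho_b_c_b * (T_a - T) + Q
    Boundary nodes are left unchanged (simple Dirichlet treatment).
    """
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1, 1:-1] = (
        T[2:, 1:-1, 1:-1] + T[:-2, 1:-1, 1:-1] +
        T[1:-1, 2:, 1:-1] + T[1:-1, :-2, 1:-1] +
        T[1:-1, 1:-1, 2:] + T[1:-1, 1:-1, :-2] -
        6.0 * T[1:-1, 1:-1, 1:-1]
    ) / dx**2
    dTdt = (k * lap + w_b * rho_b_c_b * (T_a - T) + Q) / rho_c
    T_new = T.copy()
    T_new[1:-1, 1:-1, 1:-1] += dt * dTdt[1:-1, 1:-1, 1:-1]
    return T_new
```

The inner-grid update is the part that the thesis parallelizes with OpenMP threads and CUDA kernels.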
APA, Harvard, Vancouver, ISO, and other styles
2

Campos, Joventino de Oliveira. "Método de lattice Boltzmann para simulação da eletrofisiologia cardíaca em paralelo usando GPU [Lattice Boltzmann method for parallel simulation of cardiac electrophysiology using GPU]." Universidade Federal de Juiz de Fora (UFJF), 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/3555.

Full text
Abstract:
This work presents the lattice Boltzmann method (LBM) for computational simulations of cardiac electrical activity using the monodomain model. An optimized implementation of the lattice Boltzmann method is presented which uses a collision model with multiple relaxation parameters, known as multiple relaxation time (MRT), in order to account for the anisotropy of the cardiac tissue. Focusing on fast simulations of cardiac dynamics, and exploiting the high level of parallelism present in the LBM, a GPU parallelization was performed and its performance was studied on regular and irregular three-dimensional domains. The optimized LBM GPU implementation for cardiac simulations showed acceleration factors as high as 500x for the overall simulation, and for the LBM a performance of 419 mega lattice updates per second (MLUPS) was achieved. With near-real-time simulations on a single computer equipped with a modern GPU, these results show that the proposed framework is a promising approach for application in a clinical workflow.
APA, Harvard, Vancouver, ISO, and other styles
3

Isa, Mohammad Nazrin. "High performance reconfigurable architectures for biological sequence alignment." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/7721.

Full text
Abstract:
Bioinformatics and computational biology (BCB) is a rapidly developing multidisciplinary field which encompasses a wide range of domains, including genomic sequence alignments. It is a fundamental tool in molecular biology in searching for homology between sequences. Sequence alignments are currently gaining close attention due to their great impact on aspects of quality of life, such as facilitating early disease diagnosis, identifying the characteristics of a newly discovered sequence, and drug engineering. With the vast growth of genomic data, searching for sequence homology over huge databases (often measured in gigabytes) cannot produce results within a realistic time, hence the need for acceleration. Since the exponential increase of biological databases as a result of the Human Genome Project (HGP), supercomputers and other parallel architectures such as special-purpose Very Large Scale Integration (VLSI) chips, Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) have become popular acceleration platforms. Nevertheless, there are always trade-offs between area, speed, power, cost, development time and reusability when selecting an acceleration platform. FPGAs generally offer more flexibility, higher performance and lower overheads. However, they suffer from a relatively low-level programming model as compared with off-the-shelf processors such as standard microprocessors and GPUs. Due to the aforementioned limitations, the need has arisen for optimized FPGA core implementations, which are crucial for this technology to become viable in high performance computing (HPC). This research proposes the use of state-of-the-art reprogrammable system-on-chip technology on FPGAs to accelerate three widely used sequence alignment algorithms: the Smith-Waterman with affine gap penalty algorithm, the profile hidden Markov model (HMM) algorithm and the Basic Local Alignment Search Tool (BLAST) algorithm. The three novel aspects of this research are, firstly, that the algorithms are designed and implemented in hardware, with each core achieving the highest performance compared to the state of the art. Secondly, an efficient scheduling strategy based on the double-buffering technique is adopted into the hardware architectures. Here, when the alignment matrix computation task is overlapped with the PE configuration in a folded systolic array, the overall throughput of the core is significantly increased. This is due to the bounded PE configuration time and the parallel PE configuration approach, irrespective of the number of PEs in a systolic array. In addition, the use of only two configuration elements in the PE optimizes hardware resources and enables the scalability of PE systolic arrays without relying on restricted onboard memory resources. Finally, a new performance metric is devised, which facilitates the effective comparison of design performance between different FPGA devices and families. The normalized performance indicator (speed-up per area per process technology) factors out the advantages of the area and lithography technology of any FPGA, resulting in fairer comparisons. The cores have been designed using Verilog HDL and prototyped on the Alpha Data ADM-XRC-5LX card with the Virtex-5 XC5VLX110-3FF1153 FPGA.
The implementation results show that the proposed architectures achieved giga cell updates per second (GCUPS) performances of 26.8, 29.5 and 24.2, respectively, for the acceleration of the Smith-Waterman with affine gap penalty algorithm, the profile HMM algorithm and the BLAST algorithm. In terms of speed-up improvements, the performance of the designed cores was compared against the corresponding software and previously reported FPGA implementations. In the comparison with equivalent software execution, acceleration of the optimal alignment algorithm in hardware yielded an average speed-up of 269x as compared to the SSEARCH 35 software. For the profile HMM-based sequence alignment, the designed core achieved speed-ups of 103x and 8.3x against HMMER 2.0 and the latest version of HMMER (version 3.0), respectively. The implementation of the gapped BLAST with the two-hit method in hardware achieved a greater than tenfold speed-up compared to the latest NCBI BLAST software. For comparison against other reported FPGA implementations, the proposed normalized performance indicator was used to evaluate the designed architectures fairly. The results showed that the first architecture achieved more than 50 percent improvement, while acceleration of the profile HMM sequence alignment in hardware gained a normalized speed-up of 1.34. In the case of the gapped BLAST with the two-hit method, the designed core achieved an 11x speed-up after factoring out the advantages of the Virtex-5 FPGA. In addition, further analysis was conducted in terms of cost and power performance; it was noted that the core achieved 0.46 MCUPS per dollar spent and 958.1 MCUPS per watt. This shows that FPGAs can be an attractive platform for high-performance computation, with the advantages of a smaller area footprint, and represent an economical ‘green’ solution compared to other acceleration platforms. Higher throughput can be achieved by redeploying the cores on newer, bigger and faster FPGAs with minimal design effort.
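The GCUPS figures quoted above follow the usual cell-updates-per-second definition for pairwise alignment throughput; a small illustrative computation (not code from the thesis):

```python
def gcups(query_length, database_length, runtime_seconds):
    """Giga cell updates per second: every query residue is scored against
    every database residue, so the alignment matrix has
    query_length * database_length cells."""
    return query_length * database_length / runtime_seconds / 1e9

# Example: a 500-residue query against a 10-million-residue database in 0.2 s.
print(f"{gcups(500, 10_000_000, 0.2):.1f} GCUPS")   # 25.0 GCUPS
```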
APA, Harvard, Vancouver, ISO, and other styles
4

Nasar-Ullah, Q. A. "High performance parallel financial derivatives computation." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1431080/.

Full text
Abstract:
Computing the price and risk of financial derivatives is a necessary activity for many financial market participants and is often undertaken by large and costly computing farms. This thesis seeks to explore the use of parallel computing, with particular focus on graphics processing units (GPUs), to improve the speed per cost ratio of such computation. This thesis addresses three distinct layers of high performance parallel financial derivatives computation: the first layer is related to the formulation of parallel algorithms that are generally used in the context of derivatives. The second layer is related to the optimum computation of pricing models, which consist of a series of computational steps or algorithms, where such pricing models are used to calculate the price and risk of individual derivatives. The third and final layer is related to deploying several pricing models within large scale infrastructures with particular focus on optimal scheduling approaches. Several contributions are made within this thesis: (i) with regard to the formulation of parallel algorithms, we introduce novel approaches for evaluating the normal cumulative distribution function (CDF), calculating option implied volatility, calibrating SABR (stochastic-αβρ) volatility models and generating CDF lookup tables. (ii) With regard to pricing models, we explore the computation of two dominant fixed income pricing models, namely non-callable bullet options and callable bond options. (iii) With regard to the computation of many such pricing models within large scale infrastructures, we devise and verify novel scheduling approaches that are able to optimally allocate tasks between a heterogeneous mix of CPU and GPU processors.
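One of the parallel algorithms listed above is option implied-volatility calculation. As a hedged serial reference point (plain Black-Scholes inversion by bisection, not the thesis's GPU formulation; function names are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, t, rate, vol):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

def implied_vol(price, spot, strike, t, rate, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert Black-Scholes for volatility by bisection.

    Relies on the call price being monotonically increasing in volatility.
    """
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, t, rate, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

On a GPU, many such inversions would run in parallel, one per option; the thesis develops more efficient formulations than this bisection baseline.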
APA, Harvard, Vancouver, ISO, and other styles
5

Ahrens, James P. "Scientific experiment management with high-performance distributed computation /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pandya, Ajay Kirit. "Performance of multithreaded computations on high-speed networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ32212.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pilkey, Deborah F. "Computation of a Damping Matrix for Finite Element Model Updating." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30453.

Full text
Abstract:
The characterization of damping is important in making accurate predictions of both the true response and the frequency response of any device or structure dominated by energy dissipation. The process of modeling damping matrices, and experimentally verifying them, is challenging because damping cannot be determined via static tests as mass and stiffness can. Furthermore, damping is more difficult to determine from dynamic measurements than natural frequency. However, damping is extremely important in formulating predictive models of structures. In addition, damping matrix identification may be useful in diagnostics or health monitoring of structures. The objective of this work is to find a robust, practical procedure to identify damping matrices. All aspects of the damping identification procedure are investigated. The procedures for damping identification presented herein are based on prior knowledge of the finite element or analytical mass matrices and measured eigendata. Alternatively, a procedure is based on knowledge of the mass and stiffness matrices and the eigendata. With this in mind, an exploration into model reduction and updating is needed to make the problem more complete for practical applications. Additionally, high performance computing is used as a tool to deal with large problems. High Performance Fortran is exploited for this purpose. Finally, several examples, including one experimental example, are used to illustrate the use of these new damping matrix identification algorithms and to explore their robustness.
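As classical background for the identification problem described above (not the dissertation's algorithm), the simplest damping matrix built from known mass and stiffness matrices is Rayleigh (proportional) damping, whose two coefficients can be fitted to measured modal damping ratios:

```latex
C = \alpha M + \beta K, \qquad
\zeta_i = \frac{1}{2}\left(\frac{\alpha}{\omega_i} + \beta\,\omega_i\right)
```

The dissertation's procedures go beyond this proportional form, but the relation shows how measured eigendata (natural frequencies and damping ratios) constrain a damping matrix.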
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
8

Steen, Adrianus Jan van der. "Benchmarking of high performance computers for scientific and technical computation." [S.l.] : Utrecht : [s.n.] ; Universiteitsbibliotheek Utrecht [Host], 1997. http://www.ubu.ruu.nl/cgi-bin/grsn2url?01761909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhao, Yu. "High performance Monte Carlo computation for finance risk data analysis." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/8206.

Full text
Abstract:
Finance risk management plays an increasingly important role in the finance sector, analysing financial data to prevent potential crises. It has been widely recognised that Value at Risk (VaR) is an effective method for finance risk management and evaluation. This thesis conducts a comprehensive review of a number of VaR methods and discusses in depth their strengths and limitations. Among these VaR methods, Monte Carlo simulation and analysis has proven to be the most accurate VaR method in finance risk evaluation due to its strong modelling capabilities. However, one major challenge in Monte Carlo analysis is its high computing complexity of O(n²). To speed up the computation in Monte Carlo analysis, this thesis parallelises Monte Carlo using the MapReduce model, which has become a major software programming model in support of data-intensive applications. MapReduce consists of two functions: Map and Reduce. The Map function segments a large data set into small data chunks and distributes these data chunks among a number of computers for processing in parallel, with a Mapper processing a data chunk on a computing node. The Reduce function collects the results generated by these Map nodes (Mappers) and generates an output. The parallel Monte Carlo is evaluated initially in a small-scale MapReduce experimental environment, and subsequently evaluated in a large-scale simulation environment. Both experimental and simulation results show that the MapReduce-based parallel Monte Carlo is substantially faster than the sequential Monte Carlo in computation, and the accuracy level is maintained as well. In data-intensive applications, moving huge volumes of data among the computing nodes could incur high communication overhead. To address this issue, this thesis further considers data locality in the MapReduce-based parallel Monte Carlo, and evaluates the impact of data locality on computational performance.
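The thesis parallelizes Monte Carlo VaR with the MapReduce model. A toy sketch of that Map/Reduce split (plain Python standing in for an actual MapReduce framework; the one-factor loss model and parameter names are illustrative assumptions):

```python
import random

def map_task(n_paths, mu, sigma, seed):
    """Mapper: simulate one chunk of one-period portfolio losses."""
    rng = random.Random(seed)
    return [-(mu + sigma * rng.gauss(0.0, 1.0)) for _ in range(n_paths)]

def reduce_task(chunks, confidence=0.99):
    """Reducer: merge all simulated losses and read off the VaR quantile."""
    losses = sorted(loss for chunk in chunks for loss in chunk)
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

# Example: four mappers of 25,000 paths each, then a single reducer.
chunks = [map_task(25_000, mu=0.0005, sigma=0.02, seed=s) for s in range(4)]
print("99% one-period VaR:", reduce_task(chunks))
```

Each mapper works on an independent chunk, which is what lets the computation scale out; the reducer only sees per-chunk results.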
APA, Harvard, Vancouver, ISO, and other styles
10

Vetter, Jeffrey Scott. "Techniques and optimizations for high performance computational steering." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/9242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "High performance Computation"

1

Chaudhary, Vipin. Computation checkpointing and migration. Hauppauge NY: Nova Science Publishers, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lee, H. P., K. Kumar, Institute of High Performance Computing (Singapore), and National University of Singapore, eds. Recent advances in computational science & engineering: Proceedings of the International Conference on Scientific and Engineering Computation (IC-SEC) 2002; 3-5 December 2002, Raffles City Convention Centre, Singapore. London: Imperial College Press, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Krause, Egon, Yurii I. Shokin, Michael Resch, and Nina Shokina, eds. Computational Science and High Performance Computing. Berlin/Heidelberg: Springer-Verlag, 2005. http://dx.doi.org/10.1007/3-540-32376-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ng, Michael K., Andrei Doncescu, Laurence T. Yang, and Tau Leng, eds. High Performance Computational Science and Engineering. Boston, MA: Springer US, 2005. http://dx.doi.org/10.1007/b104300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Topping, B. H. V., and L. Lämmer, eds. High Performance Computing for Computational Mechanics. Stirlingshire, UK: Saxe-Coburg Publications, 2000. http://dx.doi.org/10.4203/csets.4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Topping, B. H. V., ed. Computational Mechanics using High Performance Computing. Stirlingshire, UK: Saxe-Coburg Publications, 2002. http://dx.doi.org/10.4203/csets.9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Krause, Egon, Yurii I. Shokin, Michael Resch, and Nina Shokina, eds. Computational Science and High Performance Computing III. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-69010-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Krause, Egon, Yurii Shokin, Michael Resch, Dietmar Kröner, and Nina Shokina, eds. Computational Science and High Performance Computing IV. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-17770-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mrozek, Dariusz. High-Performance Computational Solutions in Protein Bioinformatics. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-06971-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Krause, Egon, Yurii Shokin, Michael Resch, and Nina Shokina, eds. Computational Science and High Performance Computing II. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-31768-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "High performance Computation"

1

Sano, Kentaro. "FPGA-Based Systolic Computational-Memory Array for Scalable Stencil Computations." In High-Performance Computing Using FPGAs, 279–303. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-1791-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ashley, John, and Mark Joshi. "Manycore Parallel Computation." In High-Performance Computing in Finance, 471–507. Boca Raton, FL: Chapman and Hall/CRC, 2018. http://dx.doi.org/10.1201/9781315372006-16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nüssle, Mondrian, Holger Fröning, Sven Kapferer, and Ulrich Brüning. "Accelerate Communication, not Computation!" In High-Performance Computing Using FPGAs, 507–42. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-1791-0_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Arbenz, Peter, Walter Gander, and Michael Oettli. "The Remote Computation System." In High-Performance Computing and Networking, 662–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61142-8_611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Noble, Michael S., and Stoyanka Zlateva. "Scientific Computation with JavaSpaces." In High-Performance Computing and Networking, 657–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-48228-8_75.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shewale, Ashwini, Nayan Waghmare, Anuja Sonawane, Utkarsha Teke, and Santosh D. Kumar. "High Performance Computation Analysis for Medical Images Using High Computational Method." In Advances in Computing Applications, 193–208. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2630-0_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

van Liere, Robert, Jurriaan D. Mulder, and Jarke J. van Wijk. "Computational steering." In High-Performance Computing and Networking, 696–702. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61142-8_616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Guest, M. F., P. Sherwood, and J. A. Nichols. "Massive Parallelism: The Hardware for Computational Chemistry?" In High-Performance Computing, 259–72. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4615-4873-7_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hluchy, Ladislav, Giang T. Nguyen, Ladislav Halada, and Viet D. Tran. "Cluster Computation for Flood Simulations." In High-Performance Computing and Networking, 425–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-48228-8_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Foster, Ian. "High-Performance Computational Grids." In High Performance Computing Systems and Applications, 17–18. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5611-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "High performance Computation"

1

Shewale, Ashwini, Nayan Waghmare, Anuja Sonawane, and Utkarsha Teke. "High Performance Computation Analysis for Medical Images using High Computational Methods." In the Second International Conference. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/2905055.2905111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Couchman, Hugh. "Computational Astrophysics." In 2008 22nd High Performance Computing Symposium (HPCS). IEEE, 2008. http://dx.doi.org/10.1109/hpcs.2008.38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pennycuff, Corey, and Tim Weninger. "Fast, exact graph diameter computation with vertex programming." In 1st High Performance Graph Mining workshop. Barcelona Supercomputing Center, 2015. http://dx.doi.org/10.5821/hpgm15.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wörister, Michael, Harald Steinlechner, Stefan Maierhofer, and Robert F. Tobler. "Lazy incremental computation for efficient scene graph rendering." In the 5th High-Performance Graphics Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2492045.2492051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Roux, Y., J. Wackers, and L. Dorez. "Slamming Computation on the Multihull Groupama 3." In Innovation in High Performance Sailing Yachts 2010. RINA, 2010. http://dx.doi.org/10.3940/rina.innovsail.2010.01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Skidmore, J. L., M. J. Sottile, J. E. Cuny, and A. D. Malony. "A Prototype Notebook-Based Environment for Computational Tools." In SC98 - High Performance Networking and Computing Conference. IEEE, 1998. http://dx.doi.org/10.1109/sc.1998.10031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lucas, Robert F., and Gene Wagenbreth. "Multifrontal computations on accelerators." In 2014 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2014. http://dx.doi.org/10.1109/hpec.2014.7040971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Asaduzzaman, S., and Muthucumaru Maheswaran. "Towards a Decentralized Algorithm for Mapping Network and Computational Resources for Distributed Data-Flow Computations." In 21st International Symposium on High Performance Computing Systems and Applications (HPCS'07). IEEE, 2007. http://dx.doi.org/10.1109/hpcs.2007.32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cai, Jonathon, Muthu Baskaran, Benoit Meister, and Richard Lethin. "Optimization of symmetric tensor computations." In 2015 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2015. http://dx.doi.org/10.1109/hpec.2015.7322458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kayum, N., A. Baddourah, and O. Hajjar. "Methods to Overlap Communication with Computation." In Third EAGE Workshop on High Performance Computing for Upstream. Netherlands: EAGE Publications BV, 2017. http://dx.doi.org/10.3997/2214-4609.201702326.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "High performance Computation"

1

Resasco, Diana C., and Martin H. Schultz. High Performance Computer Models in Computational Acoustics. Fort Belvoir, VA: Defense Technical Information Center, September 1997. http://dx.doi.org/10.21236/ada628567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sales, B., and H. Lyon. Materials by computational design -- High performance thermoelectric materials. Office of Scientific and Technical Information (OSTI), April 1997. http://dx.doi.org/10.2172/541927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Guest, M. F., E. Apra, and D. E. Bernholdt. High performance computational chemistry: Towards fully distributed parallel algorithms. Office of Scientific and Technical Information (OSTI), July 1994. http://dx.doi.org/10.2172/10162988.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bruno, Oscar P. High-Performance Computational Electromagnetics in Frequency-Domain and Time-Domain. Fort Belvoir, VA: Defense Technical Information Center, March 2015. http://dx.doi.org/10.21236/ada622789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sookoor, Tamim I., David L. Bruno, and Dale R. Shires. Allocating Tactical High-Performance Computer (HPC) Resources to Offloaded Computation in Battlefield Scenarios. Fort Belvoir, VA: Defense Technical Information Center, December 2013. http://dx.doi.org/10.21236/ada593253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

James, Conrad D., Adrian B. Schiess, Jamie Howell, Michael J. Baca, L. Donald Partridge, Patrick Sean Finnegan, Steven L. Wolfley, et al. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing. Office of Scientific and Technical Information (OSTI), October 2013. http://dx.doi.org/10.2172/1096252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

McGraw, J. R., G. Hedstrom, and T. De Groot. ONT High Gain Initiative WRAP (Wide Area Rapid Acoustic Prediction) computational performance section. Office of Scientific and Technical Information (OSTI), October 1990. http://dx.doi.org/10.2172/6223856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ayoul-Guilmard, Q., S. Ganesh, F. Nobile, R. Badia, J. Ejarque, L. Cirrottola, A. Froehly, et al. D1.4 Final public Release of the solver. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.009.

Full text
Abstract:
This deliverable presents the final software release of Kratos Multiphysics, together with the XMC library, Hyperloom and PyCOMPSs API definitions [13]. This release also contains the latest developments on MPI parallel remeshing in ParMmg. This report is meant to serve as a supplement to the public release of the software. Kratos is “a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface”. XMC is “a Python library for parallel, adaptive, hierarchical Monte Carlo algorithms, aiming at reliability, modularity, extensibility and high performance”. Hyperloom and PyCOMPSs are environments for enabling parallel and distributed computation. ParMmg is open-source software which offers parallel mesh adaptation of three-dimensional volume meshes.
APA, Harvard, Vancouver, ISO, and other styles
9

Kittridge, Mark, Roberto Lopez-Anido, Jacob Marquis, Deborah Williams, Thomas Snape, Shawn Eary, Christopher J. Duncan, and Keith A. Berube. Advanced Design and Optimization of High Performance Combatant Craft: Material Testing and Computational Tools. Fort Belvoir, VA: Defense Technical Information Center, May 2012. http://dx.doi.org/10.21236/ada563417.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Prinja, Anil K., and David A. Dixon. Moment-Preserving Computational Approach for High Energy Charged Particle Transport: Second Interim Performance Report. Fort Belvoir, VA: Defense Technical Information Center, November 2013. http://dx.doi.org/10.21236/ada591799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
