Journal articles on the topic 'Compute-Intensive Applications'

Consult the top 50 journal articles for your research on the topic 'Compute-Intensive Applications.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Houstis, Catherine, Sarantos Kapidakis, Evangelos P. Markatos, and Erol Gelenbe. "Execution of compute-intensive applications into parallel machines." Information Sciences 97, no. 1-2 (1997): 83–124. http://dx.doi.org/10.1016/s0020-0255(96)00174-0.

2

Wang, Dong, Peng Cao, and Yang Xiao. "A parallel arithmetic array for accelerating compute-intensive applications." IEICE Electronics Express 11, no. 4 (2014): 20130981. http://dx.doi.org/10.1587/elex.11.20130981.

3

Song, Xiaojia, Tao Xie, and Stephen Fischer. "Two Reconfigurable NDP Servers: Understanding the Impact of Near-Data Processing on Data Center Applications." ACM Transactions on Storage 17, no. 4 (2021): 1–27. http://dx.doi.org/10.1145/3460201.

Abstract:
Existing near-data processing (NDP)-powered architectures have demonstrated their strength for some data-intensive applications. Data center servers, however, have to serve not only data-intensive but also compute-intensive applications. An in-depth understanding of the impact of NDP on various data center applications is still needed. For example, can a compute-intensive application also benefit from NDP? In addition, current NDP techniques focus on maximizing the data processing rate by always utilizing all computing resources at all times. Is this “always running in full gear” strategy cons
4

Taifi, Moussa, Abdallah Khreishah, and Justin Y. Shi. "Building a Private HPC Cloud for Compute and Data-Intensive Applications." International Journal on Cloud Computing: Services and Architecture 3, no. 2 (2013): 1–20. http://dx.doi.org/10.5121/ijccsa.2013.3201.

5

Shalini Lakshmi, A. J., and M. Vijayalakshmi. "A predictive context aware collaborative offloading framework for compute-intensive applications." Journal of Intelligent & Fuzzy Systems 40, no. 1 (2021): 77–88. http://dx.doi.org/10.3233/jifs-182906.

Abstract:
The resourceful mobile devices with augmented capabilities around human pave the way for utilizing it as delegators for resource-constrained devices to run compute-intensive applications. Such collaborative resource sharing policy among mobile devices throws challenges like identifying competent alternatives for offloading and diminishing time consumption of pre-offload process to accomplish remarkable offloading. This paper presents a Mobile Cloud Computing framework with Predictive Context-Aware Collaborative Offloading Process (PCA-COP) that fixes these challenges through conductive alterna
6

Taifi, Moussa, Abdallah Khreishah, and Justin Y. Shi. "Building a Private HPC Cloud for Compute and Data-Intensive Applications." International Journal on Cloud Computing: Services and Architecture (IJCCSA) 3, April (2018): 1–20. https://doi.org/10.5281/zenodo.1434573.

Abstract:
Traditional HPC (High Performance Computing) clusters are best suited for well-formed calculations. The orderly batch-oriented HPC cluster offers maximal potential for performance per application, but limits resource efficiency and user flexibility. An HPC cloud can host multiple virtual HPC clusters, giving the scientists unprecedented flexibility for research and development. With the proper incentive model, resource efficiency will be automatically maximized. In this context, there are three new challenges. The first is the virtualization overheads. The second is the administrative complexi
7

Ahuja, Sanjay P., and Bhagavathi Kaza. "Performance Evaluation of Data Intensive Computing In the Cloud." International Journal of Cloud Applications and Computing 4, no. 2 (2014): 34–47. http://dx.doi.org/10.4018/ijcac.2014040103.

Abstract:
Big data is a topic of active research in the cloud community. With increasing demand for data storage in the cloud, study of data-intensive applications is becoming a primary focus. Data-intensive applications involve high CPU usage for processing large volumes of data on the scale of terabytes or petabytes. While some research exists for the performance effect of data intensive applications in the cloud, none of the research compares the Amazon Elastic Compute Cloud (Amazon EC2) and Google Compute Engine (GCE) clouds using multiple benchmarks. This study performs extensive research on the Am
8

Nguyen, Giang, Viera Šipková, Stefan Dlugolinsky, Binh Minh Nguyen, Viet Tran, and Ladislav Hluchý. "A comparative study of operational engineering for environmental and compute-intensive applications." Array 12 (December 2021): 100096. http://dx.doi.org/10.1016/j.array.2021.100096.

9

Aulov, V., K. De, D. Drizhuk, et al. "Workload Management Portal for High Energy Physics Applications and Compute Intensive Science." Procedia Computer Science 66 (2015): 564–73. http://dx.doi.org/10.1016/j.procs.2015.11.064.

10

Ciżnicki, Miłosz, Michał Kierzynka, Piotr Kopta, Krzysztof Kurowski, and Paweł Gepner. "Benchmarking Data and Compute Intensive Applications on Modern CPU and GPU Architectures." Procedia Computer Science 9 (2012): 1900–1909. http://dx.doi.org/10.1016/j.procs.2012.04.208.

11

Fons, Francisco, Mariano Fons, Enrique Cantó, and Mariano López. "Deployment of Run-Time Reconfigurable Hardware Coprocessors Into Compute-Intensive Embedded Applications." Journal of Signal Processing Systems 66, no. 2 (2011): 191–221. http://dx.doi.org/10.1007/s11265-011-0607-9.

12

Kuang, Ping, Wenxia Guo, Xiang Xu, Hongjian Li, Wenhong Tian, and Rajkumar Buyya. "Analyzing Energy-Efficiency of Two Scheduling Policies in Compute-Intensive Applications on Cloud." IEEE Access 6 (2018): 45515–26. http://dx.doi.org/10.1109/access.2018.2861462.

13

Kim, Jik-Soo, Henrique Andrade, and Alan Sussman. "Principles for designing data-/compute-intensive distributed applications and middleware systems for heterogeneous environments." Journal of Parallel and Distributed Computing 67, no. 7 (2007): 755–71. http://dx.doi.org/10.1016/j.jpdc.2007.04.006.

14

Searles, Robert, Michela Taufer, Sunita Chandrasekaran, Stephen Herbein, and Travis Johnston. "Creating a portable, high-level graph analytics paradigm for compute and data-intensive applications." International Journal of High Performance Computing and Networking 1, no. 1 (2017): 1. http://dx.doi.org/10.1504/ijhpcn.2017.10007922.

15

Searles, Robert, Stephen Herbein, Travis Johnston, Michela Taufer, and Sunita Chandrasekaran. "Creating a portable, high-level graph analytics paradigm for compute and data-intensive applications." International Journal of High Performance Computing and Networking 13, no. 1 (2019): 105. http://dx.doi.org/10.1504/ijhpcn.2019.097054.

16

Ajani, Taiwo Samuel, Agbotiname Lucky Imoize, and Aderemi A. Atayero. "An Overview of Machine Learning within Embedded and Mobile Devices–Optimizations and Applications." Sensors 21, no. 13 (2021): 4412. http://dx.doi.org/10.3390/s21134412.

Abstract:
Embedded systems technology is undergoing a phase of transformation owing to the novel advancements in computer architecture and the breakthroughs in machine learning applications. The areas of applications of embedded machine learning (EML) include accurate computer vision schemes, reliable speech recognition, innovative healthcare, robotics, and more. However, there exists a critical drawback in the efficient implementation of ML algorithms targeting embedded applications. Machine learning algorithms are generally computationally and memory intensive, making them unsuitable for resource-cons
17

Decker, K. M., C. Jayewardena, and R. Rehmann. "Libraries and Development Environments for Monte Carlo Simulations of Lattice Gauge Theories on Parallel Computers." International Journal of Modern Physics C 2, no. 1 (1991): 316–21. http://dx.doi.org/10.1142/s0129183191000408.

Abstract:
We describe the library lgtlib, and lgttool, the corresponding development environment for Monte Carlo simulations of lattice gauge theory on multiprocessor vector computers with shared memory. We explain why distributed memory parallel processor (DMPP) architectures are particularly appealing for compute-intensive scientific applications, and introduce the design of a general application and program development environment system for scientific applications on DMPP architectures.
18

Panneerselvam, Arunkumar, and Bhuvaneswari Subbaraman. "Multi-Objective Optimization for scientific workflow task scheduling in IaaS Cloud." International Journal of Engineering & Technology 7, no. 4.6 (2018): 174. http://dx.doi.org/10.14419/ijet.v7i4.6.20457.

Abstract:
The use of scientific applications on cloud networks increases day by day generating volumes of data and consuming large computational power. These scientific applications find its importance in the field of astronomy, geology, genetics and bio-technology etc. Complex and mission critical scientific applications can be modeled as scientific workflows and can be executed in cloud. The tasks of the scientific applications are generally data intensive and compute intensive. Traditional computer networks are not suitable for handling scientific applications and hence ubiquitous distributed network
19

Ninos, Fragkiskos, Konstantinos Karalas, Dimitrios Dechouniotis, and Michael Polemis. "On Microservice-Based Architecture for Digital Forensics Applications: A Competition Policy Perspective." Future Internet 17, no. 4 (2025): 137. https://doi.org/10.3390/fi17040137.

Abstract:
Digital forensics systems are complex applications consisting of numerous individual components that demand substantial computing resources. By adopting the concept of microservices, forensics applications can be divided into smaller, independently managed services. In this context, cloud resource orchestration platforms like Kubernetes provide augmented functionalities, such as resource scaling, load balancing, and monitoring, supporting every stage of the application’s lifecycle. This article explores the deployment of digital forensics applications over a microservice-based architecture. Le
20

Calore, E., A. Gabbana, S. F. Schifano, and R. Tripiccione. "Optimization of lattice Boltzmann simulations on heterogeneous computers." International Journal of High Performance Computing Applications 33, no. 1 (2017): 124–39. http://dx.doi.org/10.1177/1094342017703771.

Abstract:
High-performance computing systems are more and more often based on accelerators. Computing applications targeting those systems often follow a host-driven approach, in which hosts offload almost all compute-intensive sections of the code onto accelerators; this approach only marginally exploits the computational resources available on the host CPUs, limiting overall performances. The obvious step forward is to run compute-intensive kernels in a concurrent and balanced way on both hosts and accelerators. In this paper, we consider exactly this problem for a class of applications based on latti
21

Balagafshe, Rahman Ghasempour, Alireza Akoushideh, and Asadollah Shahbahrami. "Matrix-matrix multiplication on graphics processing unit platform using tiling technique." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (2022): 1012. http://dx.doi.org/10.11591/ijeecs.v28.i2.pp1012-1019.

Abstract:
Today’s hardware platforms have parallel processing capabilities and many parallel programming models have been developed. It is necessary to research an efficient implementation of compute-intensive applications using available platforms. Dense matrix-matrix multiplication is an important kernel that is used in many applications, while it is computationally intensive, especially for large matrix sizes. To improve the performance of this kernel, we implement it on the graphics processing unit (GPU) platform using the tiling technique with different tile sizes. Our experimental results show the
22

Balagafshe, Rahman Ghasempour, Alireza Akoushideh, and Asadollah Shahbahrami. "Matrix-matrix multiplication on graphics processing unit platform using tiling technique." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (2022): 1012–19. https://doi.org/10.11591/ijeecs.v28.i2.pp1012-1019.

Abstract:
Today’s hardware platforms have parallel processing capabilities and many parallel programming models have been developed. It is necessary to research an efficient implementation of compute-intensive applications using available platforms. Dense matrix-matrix multiplication is an important kernel that is used in many applications, while it is computationally intensive, especially for large matrix sizes. To improve the performance of this kernel, we implement it on the graphics processing unit (GPU) platform using the tiling technique with different tile sizes. Our experimental results sh
23

Cali, Damla Senol, Thomas Anantharaman, Martin Muggli, Samer Al-Saffar, Charles Schoonover, and Neil Miller. "Abstract 2337: Accelerated optical genome mapping analysis with Stratys Compute and Guided Assembly." Cancer Research 84, no. 6_Supplement (2024): 2337. http://dx.doi.org/10.1158/1538-7445.am2024-2337.

Abstract:
Background: Optical genome maps (OGM) from Bionano enable the detection of genomic structural and copy number variants that cannot be detected by next-generation sequencing (NGS) technologies and are often missed by conventional cytogenetic techniques. Bionano has developed bioinformatics pipelines for calling structural and copy number variants including the Bionano Solve de novo assembly pipeline for constitutional analysis and the Rare Variant Analysis (RVA) pipeline for low-allele-fraction cancer applications. Both pipelines are computationally intensive and currently take 5-10 hou
24

Wu, Chenyuan, Mohammad Javad Amiri, Jared Asch, Heena Nagda, Qizhen Zhang, and Boon Thau Loo. "FlexChain." Proceedings of the VLDB Endowment 16, no. 1 (2022): 23–36. http://dx.doi.org/10.14778/3561261.3561264.

Abstract:
While permissioned blockchains enable a family of data center applications, existing systems suffer from imbalanced loads across compute and memory, exacerbating the underutilization of cloud resources. This paper presents FlexChain , a novel permissioned blockchain system that addresses this challenge by physically disaggregating CPUs, DRAM, and storage devices to process different blockchain workloads efficiently. Disaggregation allows blockchain service providers to upgrade and expand hardware resources independently to support a wide range of smart contracts with diverse CPU and memory dem
25

Diao, Yu, Yaoxuan Zhang, Yanran Li, and Jie Jiang. "Metal-Oxide Heterojunction: From Material Process to Neuromorphic Applications." Sensors 23, no. 24 (2023): 9779. http://dx.doi.org/10.3390/s23249779.

Abstract:
As technologies like the Internet, artificial intelligence, and big data evolve at a rapid pace, computer architecture is transitioning from compute-intensive to memory-intensive. However, traditional von Neumann architectures encounter bottlenecks in addressing modern computational challenges. The emulation of the behaviors of a synapse at the device level by ionic/electronic devices has shown promising potential in future neural-inspired and compact artificial intelligence systems. To address these issues, this review thoroughly investigates the recent progress in metal-oxide heterostructure
26

Jaybhaye, S. M., and V. Z. Attar. "Resource Provisioning for Scientific Workflow Applications using AWS Cloud." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (2020): 1122–26. https://doi.org/10.35940/ijeat.B4502.029320.

Abstract:
Cloud computing play a very important role in day to day life of everyone. In recent years cloud services are much popular for hosting the applications. Virtual Machine Instances are the Images of physical machines which are described with its specification and configurations such as number of microprocessor (CPU) cycles, Memory access and network bandwidth. Cloud provider must contribute special interest while designing and implementing the Iaas. The role of quality and service performance is crucial aspects in application execution Scientific workflow based applications are both compute and
27

Zhao, Dongyan, Yubo Wang, Jin Shao, et al. "Compute-in-Memory for Numerical Computations." Micromachines 13, no. 5 (2022): 731. http://dx.doi.org/10.3390/mi13050731.

Abstract:
In recent years, compute-in-memory (CIM) has been extensively studied to improve the energy efficiency of computing by reducing data movement. At present, CIM is frequently used in data-intensive computing. Data-intensive computing applications, such as all kinds of neural networks (NNs) in machine learning (ML), are regarded as ‘soft’ computing tasks. The ‘soft’ computing tasks are computations that can tolerate low computing precision with little accuracy degradation. However, ‘hard’ tasks aimed at numerical computations require high-precision computing and are also accompanied by energy eff
28

Ayyappa, B. Kanth Naga. "Compute SNDR-Boosted 22-nm MRAM-Based In-Memory Computing Macro Using Statistical Error Compensation." International Journal of Scientific Research in Engineering and Management 9, no. 5 (2025): 1–7. https://doi.org/10.55041/ijsrem49276.

Abstract:
The rapid growth of AI and data-intensive applications necessitates energy-efficient and high-performance memory solutions. In-memory computing (IMC) offers a paradigm shift by reducing data movement and enabling computation directly within memory arrays. This work presents a Compute SNDR-Boosted (Statistical Noise and Defect Resilience) 22-nm MRAM-based IMC macro that leverages statistical error compensation to mitigate device-level variability and noise. Our method integrates a statistical correction engine, enhancing the Signal-to-Noise and Distortion Ratio (SNDR), thereby achiev
29

Joe, Vijesh. "Review on Advanced Cost Effective Approach for Privacy with Dataset in Cloud Storage." Journal of ISMAC 4, no. 2 (2022): 73–83. http://dx.doi.org/10.36548/jismac.2022.2.001.

Abstract:
Cloud computing allows customers to run compute and data-intensive applications without the need for a large investment in infrastructure. Additionally, a significant amount of intermediate datasets are created and often saved, in order to reduce the expense of re-computing these applications. It becomes difficult to protect the privacy of intermediate datasets because attackers may be able to retrieve information that is sensitive to privacy via the analysis of several intermediate datasets. Existing techniques to deal with this problem generally endorse the use of encryption for all cloud da
30

Shen, Shizhe. "Theoretical analysis and comparison of memory copy accelerators." Applied and Computational Engineering 9, no. 1 (2023): 295–302. http://dx.doi.org/10.54254/2755-2721/9/20230115.

Abstract:
Over the last few years, more and more compute-intensive and memory-intensive applications have occurred. These applications are not able to be realized without the base of strong computing and high memory performance. The former has been developed successfully and outstrips the latter to certain degrees. So, researchers must overcome the shortcomings of memory copy performance to catch up with the standard of computing ability so that the systematic applications can be improved. This paper introduces two solutions for memory copy accelerators and analyses their advantages and disadvantages. I
31

Jararweh, Yaser, Moath Jarrah, and Abdelkader Bousselham. "GPU Scaling." International Journal of Information Technology and Web Engineering 9, no. 4 (2014): 13–23. http://dx.doi.org/10.4018/ijitwe.2014100102.

Abstract:
Current state-of-the-art GPU-based systems offer unprecedented performance advantages through accelerating the most compute-intensive portions of applications by an order of magnitude. GPU computing presents a viable solution for the ever-increasing complexities in applications and the growing demands for immense computational resources. In this paper the authors investigate different platforms of GPU-based systems, starting from the Personal Supercomputing (PSC) to cloud-based GPU systems. The authors explore and evaluate the GPU-based platforms and the authors present a comparison discussion
32

Gebotys, Catherine H. "Optimizing Energy During Systems Synthesis of Computer Intensive Realtime Applications." VLSI Design 7, no. 3 (1998): 303–20. http://dx.doi.org/10.1155/1998/54063.

Abstract:
Optimizing energy during the synthesis of VLSI systems for realtime-constrained embedded applications is an important new problem. This paper presents a new methodology for simultaneous scheduling and allocation of VLSI systems which minimize estimated energy for large realtime compute intensive applications. Minimization of estimated energy and VLSI chip area using hierarchical decomposition, bin packing algorithms and integer linear programming techniques along with voltage scaling is performed. Common subexpression elimination, precomputation, data regeneration, and loop merging transformat
33

Zhai, Yuanzhao, Bo Ding, Pengfei Zhang, and Jie Luo. "Cloudroid Swarm: A QoS-Aware Framework for Multirobot Cooperation Offloading." Wireless Communications and Mobile Computing 2021 (June 18, 2021): 1–18. http://dx.doi.org/10.1155/2021/6631111.

Abstract:
Computation offloading has been widely recognized as an effective way to promote the capabilities of resource-constrained mobile devices. Recent years have seen a renewal of the importance of this technology in the emerging field of mobile robots, supporting resource-intensive robot applications. However, cooperating to solve complex tasks in the physical world, which is a significant feature of a robot swarm compared to traditional mobile computing devices, has not received in-depth attention in research concerned with traditional computation offloading. In this study, we propose an approach
34

Yasudo, Ryota, José G. F. Coutinho, Ana-Lucia Varbanescu, et al. "Analytical Performance Estimation for Large-Scale Reconfigurable Dataflow Platforms." ACM Transactions on Reconfigurable Technology and Systems 14, no. 3 (2021): 1–21. http://dx.doi.org/10.1145/3452742.

Abstract:
Next-generation high-performance computing platforms will handle extreme data- and compute-intensive problems that are intractable with today’s technology. A promising path in achieving the next leap in high-performance computing is to embrace heterogeneity and specialised computing in the form of reconfigurable accelerators such as FPGAs, which have been shown to speed up compute-intensive tasks with reduced power consumption. However, assessing the feasibility of large-scale heterogeneous systems requires fast and accurate performance prediction. This article proposes Performance Estimation
35

La Penna, Giovanni, Davide Tiana, and Paolo Giannozzi. "Measuring Shared Electrons in Extended Molecular Systems: Covalent Bonds from Plane-Wave Representation of Wave Function." Molecules 26, no. 13 (2021): 4044. http://dx.doi.org/10.3390/molecules26134044.

Abstract:
In the study of materials and macromolecules by first-principle methods, the bond order is a useful tool to represent molecules, bulk materials and interfaces in terms of simple chemical concepts. Despite the availability of several methods to compute the bond order, most applications have been limited to small systems because a high spatial resolution of the wave function and an all-electron representation of the electron density are typically required. Both limitations are critical for large-scale atomistic calculations, even within approximate density-functional theory (DFT) approaches. In
36

Avan, Amin, Akramul Azim, and Qusay H. Mahmoud. "A State-of-the-Art Review of Task Scheduling for Edge Computing: A Delay-Sensitive Application Perspective." Electronics 12, no. 12 (2023): 2599. http://dx.doi.org/10.3390/electronics12122599.

Abstract:
The edge computing paradigm enables mobile devices with limited memory and processing power to execute delay-sensitive, compute-intensive, and bandwidth-intensive applications on the network by bringing the computational power and storage capacity closer to end users. Edge computing comprises heterogeneous computing platforms with resource constraints that are geographically distributed all over the network. As users are mobile and applications change over time, identifying an optimal task scheduling method is a complex multi-objective optimization problem that is NP-hard, meaning the exhausti
37

Meng, Xiandong, and Yanqing Ji. "Modern Computational Techniques for the HMMER Sequence Analysis." ISRN Bioinformatics 2013 (September 3, 2013): 1–13. http://dx.doi.org/10.1155/2013/252183.

Abstract:
This paper focuses on the latest research and critical reviews on modern computing architectures, software and hardware accelerated algorithms for bioinformatics data analysis with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show the detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics society. The characteristics of the sequence analysis, such as data and compute-intensive natures, make it very attractive to optimize and parallelize by using both traditional
38

Jung, Myoungsoo, Ellis H. Wilson, Wonil Choi, et al. "Exploring the Future of Out-of-Core Computing with Compute-Local Non-Volatile Memory." Scientific Programming 22, no. 2 (2014): 125–39. http://dx.doi.org/10.1155/2014/303810.

Abstract:
Drawing parallels to the rise of general purpose graphical processing units (GPGPUs) as accelerators for specific high-performance computing (HPC) workloads, there is a rise in the use of non-volatile memory (NVM) as accelerators for I/O-intensive scientific applications. However, existing works have explored use of NVM within dedicated I/O nodes, which are distant from the compute nodes that actually need such acceleration. As NVM bandwidth begins to out-pace point-to-point network capacity, we argue for the need to break from the archetype of completely separated storage. Therefore, in this
39

Ramanathan, Saravanan, Nitin Shivaraman, Seima Suryasekaran, Arvind Easwaran, Etienne Borde, and Sebastian Steinhorst. "A survey on time-sensitive resource allocation in the cloud continuum." it - Information Technology 62, no. 5-6 (2020): 241–55. http://dx.doi.org/10.1515/itit-2020-0013.

Abstract:
Artificial Intelligence (AI) and Internet of Things (IoT) applications are rapidly growing in today’s world where they are continuously connected to the internet and process, store and exchange information among the devices and the environment. The cloud and edge platform is very crucial to these applications due to their inherent compute-intensive and resource-constrained nature. One of the foremost challenges in cloud and edge resource allocation is the efficient management of computation and communication resources to meet the performance and latency guarantees of the applications.
40

Suresh, Salini, and L. Manjunatha Rao. "CCCORE: Cloud Container for Collaborative Research." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 3 (2018): 1659. http://dx.doi.org/10.11591/ijece.v8i3.pp1659-1670.

Abstract:
Cloud-based research collaboration platforms render scalable, secure and inventive environments that enabled academic and scientific researchers to share research data, applications and provide access to high- performance computing resources. Dynamic allocation of resources according to the unpredictable needs of applications used by researchers is a key challenge in collaborative research environments. We propose the design of Cloud Container based Collaborative Research (CCCORE) framework to address dynamic resource provisioning according to the variable workload of compute and data-intensiv
41

Berrazueta-Mena, David, and Byron Navas. "AHA: Design and Evaluation of Compute-Intensive Hardware Accelerators for AMD-Xilinx Zynq SoCs Using HLS IP Flow." Computers 14, no. 5 (2025): 189. https://doi.org/10.3390/computers14050189.

Abstract:
The increasing complexity of algorithms in embedded applications has amplified the demand for high-performance computing. Heterogeneous embedded systems, particularly FPGA-based systems-on-chip (SoCs), enhance execution speed by integrating hardware accelerator intellectual property (IP) cores. However, traditional low-level IP-core design presents significant challenges. High-level synthesis (HLS) offers a promising alternative, enabling efficient FPGA development through high-level programming languages. Yet, effective methodologies for designing and evaluating heterogeneous FPGA-based SoCs
42

Vlădoiu, Monica, and Zoran Constantinescu. "Development Journey of QADPZ - A Desktop Grid Computing Platform." International Journal of Computers Communications & Control 4, no. 1 (2009): 82. http://dx.doi.org/10.15837/ijccc.2009.1.2416.

Abstract:
In this paper we present QADPZ, an open source system for desktop grid computing, which enables users of a local network or Internet to share resources. QADPZ allows a centralized management and use of the computational resources of idle computers from a network of desktop computers. QADPZ users can submit compute-intensive applications to the system, which are then automatically scheduled for execution. The scheduling is performed according to the hardware and software requirements of the application. Users can later monitor and control the execution of the applications. Each application cons
43

Aral, Atakan, Ivona Brandic, Rafael Brundo Uriarte, Rocco De Nicola, and Vincenzo Scoca. "Addressing Application Latency Requirements through Edge Scheduling." Journal of Grid Computing 17, no. 4 (2019): 677–98. http://dx.doi.org/10.1007/s10723-019-09493-z.

Abstract:
Latency-sensitive and data-intensive applications, such as IoT or mobile services, are leveraged by Edge computing, which extends the cloud ecosystem with distributed computational resources in proximity to data providers and consumers. This brings significant benefits in terms of lower latency and higher bandwidth. However, by definition, edge computing has limited resources with respect to cloud counterparts; thus, there exists a trade-off between proximity to users and resource utilization. Moreover, service availability is a significant concern at the edge of the network, where ex
44

Prakash, Shiv, and Deo P. Vidyarthi. "Observations on Effect of IPC in GA Based Scheduling on Computational Grid." International Journal of Grid and High Performance Computing 4, no. 1 (2012): 67–80. http://dx.doi.org/10.4018/jghpc.2012010105.

Abstract:
Computational Grid (CG) provides a wide distributed platform for high end compute intensive applications. Inter Process Communication (IPC) affects the performance of a scheduling algorithm drastically. Genetic Algorithms (GA), a search procedure based on the evolutionary computation, is able to solve a class of complex optimization problems. This paper proposes a GA based scheduling model observing the effect of IPC on the performance of scheduling in computational grid. The proposed model studies the effects of Inter Process Communication (IPC), processing rate () and arrival rate (). Simula
45

Ephremidze, L., and I. Spitkovsky. "On explicit Wiener–Hopf factorization of 2 × 2 matrices in a vicinity of a given matrix." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2238 (2020): 20200027. http://dx.doi.org/10.1098/rspa.2020.0027.

Abstract:
As it is known, the existence of the Wiener–Hopf factorization for a given matrix is a well-studied problem. Severe difficulties arise, however, when one needs to compute the factors approximately and obtain the partial indices. This problem is very important in various engineering applications and, therefore, remains to be subject of intensive investigations. In the present paper, we approximate a given matrix function and then explicitly factorize the approximation regardless of whether it has stable partial indices. For this reason, a technique developed in the Janashia–Lagvilava matrix spe
46

Döbrich, Stefan, and Christian Hochberger. "Low-Complexity Online Synthesis for AMIDAR Processors." International Journal of Reconfigurable Computing 2010 (2010): 1–15. http://dx.doi.org/10.1155/2010/953693.

Abstract:
Future chip technologies will change the way we deal with hardware design. First of all, logic resources will be available in vast amount. Furthermore, engineering specialized designs for particular applications will no longer be the general approach as the nonrecurring expenses will grow tremendously. Reconfigurable logic has often been promoted as a solution to these problems. Today, it can be found in two varieties: field programmable gate arrays or coarse-grained reconfigurable arrays. Using this type of technology typically requires a lot of expert knowledge, which is not sufficiently ava
47

Noor, Fazal, and Hatem ElBoghdadi. "Neural Nets Distributed on Microcontrollers using Metaheuristic Parallel Optimization Algorithm." Annals of Emerging Technologies in Computing 4, no. 4 (2020): 28–38. http://dx.doi.org/10.33166/aetic.2020.04.004.

Abstract:
Metaheuristic algorithms are powerful methods for solving compute intensive problems. neural Networks, when trained well, are great at prediction and classification type of problems. Backpropagation is the most popular method utilized to obtain the weights of Neural Nets though it has some limitations of slow convergence and getting stuck in a local minimum. In order to overcome these limitations, in this paper, a hybrid method combining the parallel distributed bat algorithm with backpropagation is proposed to compute the weights of the Neural Nets. The aim is to use the hybrid method in appl
48

Mendon, Ashwin A., Andrew G. Schmidt, and Ron Sass. "A Hardware Filesystem Implementation with Multidisk Support." International Journal of Reconfigurable Computing 2009 (2009): 1–13. http://dx.doi.org/10.1155/2009/572860.

Abstract:
Modern High-End Computing systems frequently include FPGAs as compute accelerators. These programmable logic devices now support disk controller IP cores which offer the ability to introduce new, innovative functionalities that, previously, were not practical. This article describes one such innovation: a filesystem implemented in hardware. This has the potential of improving the performance of data-intensive applications by connecting secondary storage directly to FPGA compute accelerators. To test the feasibility of this idea, a Hardware Filesystem was designed with four basic operations (op
49

Zhu, Jinqi, Hui Zhao, Yanmin Wei, Chunmei Ma, and Qing Lv. "Unmanned Aerial Vehicle Computation Task Scheduling Based on Parking Resources in Post-Disaster Rescue." Applied Sciences 13, no. 1 (2022): 289. http://dx.doi.org/10.3390/app13010289.

Abstract:
Natural disasters bring huge loss of life and property to human beings. Unmanned aerial vehicles (UAVs) own the advantages of high mobility, high flexibility, and rapid deployment, and are important equipment during post-disaster rescue. However, UAVs usually have restricted battery and computing power. They are not fit for performing compute-intensive tasks during rescue. Since there are widespread parking resources in a city, multiple parked vehicles working together to compute the applications from UAVs in a post-disaster rescue is investigated to ensure the quality of experience (QoE) of t
50

Han, Yan-Bo, Jun-Yi Sun, Gui-Ling Wang, and Hou-Fu Li. "A Cloud-Based BPM Architecture with User-End Distribution of Non-Compute-Intensive Activities and Sensitive Data." Journal of Computer Science and Technology 25, no. 6 (2010): 1157–67. http://dx.doi.org/10.1007/s11390-010-9396-z.
