
Journal articles on the topic 'Computation-intensive'


Consult the top 50 journal articles for your research on the topic 'Computation-intensive.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

James, F. "Computation-intensive Calculation Techniques." Europhysics News 19, no. 2 (1988): 21–22. http://dx.doi.org/10.1051/epn/19881902021.

2

McCabe, Donna. "Second Annual Intensive Workshop in Sound Computation." Computer Music Journal 17, no. 1 (1993): 70. http://dx.doi.org/10.2307/3680574.

3

Mistry, Amitkumar, and Rahul Kher. "Embedded Software Optimization for Computation - Intensive Applications." Journal of Electrical and Electronic Engineering 8, no. 2 (2020): 42. http://dx.doi.org/10.11648/j.jeee.20200802.11.

4

Krishnaswamy, S., Seng Wai Loke, and A. Zaslavsky. "Estimating computation times of data-intensive applications." IEEE Distributed Systems Online 5, no. 4 (2004): 1–12. http://dx.doi.org/10.1109/mdso.2004.1301253.

5

Sun, Han Lin. "An Improved MapReduce Model for Computation-Intensive Task." Advanced Materials Research 756-759 (September 2013): 1701–5. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1701.

Abstract:
MapReduce is a widely adopted parallel programming model. The standard MapReduce model is designed for data-intensive processing. However, some machine learning algorithms are computation-intensive, time-consuming tasks that process the same data set repeatedly. In this paper, we propose an improved MapReduce model for computation-intensive algorithms. The model is constructed from a service combination perspective. In the model, the whole task is divided into many subtasks according to the algorithm's parameters, and a datagram channel with an acknowledgement mechanism is used for communication among cluster workers. We take the multifractal detrended fluctuation analysis algorithm as an example to demonstrate the model.
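A minimal sketch of the idea in the abstract above: the task is partitioned by algorithm parameters rather than by data, each subtask reprocesses the same data set, and a reduce step gathers the per-parameter results. This is only an illustration with hypothetical names, using Python's multiprocessing pool in place of the paper's datagram-with-acknowledgement worker channel.

```python
from multiprocessing import Pool

def mfdfa_subtask(scale):
    """Hypothetical compute-intensive 'map' step: analyse one scale of a
    multifractal detrended fluctuation analysis over the shared data set."""
    # Placeholder workload: each subtask works on a different algorithm
    # parameter (the scale) rather than on a different data partition.
    return scale, sum(i * i for i in range(scale)) / scale

def reduce_results(results):
    """'Reduce' step: collect the per-parameter results into one report."""
    return dict(results)

if __name__ == "__main__":
    scales = [2 ** k for k in range(4, 12)]        # algorithm parameters
    with Pool() as pool:                           # workers stand in for cluster nodes
        partial = pool.map(mfdfa_subtask, scales)  # one subtask per parameter
    print(reduce_results(partial))
```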
6

Kumar, M. "Measuring parallelism in computation-intensive scientific/engineering applications." IEEE Transactions on Computers 37, no. 9 (1988): 1088–98. http://dx.doi.org/10.1109/12.2259.

7

Pukite, P. R., and J. Pukite. "Digital signal processors for computation intensive statistics and simulation." ACM SIGSIM Simulation Digest 21, no. 2 (1990): 20–29. http://dx.doi.org/10.1145/382264.382432.

8

Kulkarni, R. D., and B. F. Momin. "Skyline computation for frequent queries in update intensive environment." Journal of King Saud University - Computer and Information Sciences 28, no. 4 (2016): 447–56. http://dx.doi.org/10.1016/j.jksuci.2015.04.003.

9

Kuang, Qiaobin, Jie Gong, Xiang Chen, and Xiao Ma. "Analysis on Computation-Intensive Status Update in Mobile Edge Computing." IEEE Transactions on Vehicular Technology 69, no. 4 (2020): 4353–66. http://dx.doi.org/10.1109/tvt.2020.2974816.

10

Cheng, S. T., and A. K. Agrawala. "Optimal Replication of Series-Parallel Graphs for Computation-Intensive Applications." Journal of Parallel and Distributed Computing 28, no. 2 (1995): 113–29. http://dx.doi.org/10.1006/jpdc.1995.1094.

11

Yang, Jian, Ke Zeng, Han Hu, and Hongsheng Xi. "Dynamic Cluster Reconfiguration for Energy Conservation in Computation Intensive Service." IEEE Transactions on Computers 61, no. 10 (2012): 1401–16. http://dx.doi.org/10.1109/tc.2011.173.

12

Crowley, P. H. "Resampling Methods for Computation-Intensive Data Analysis in Ecology and Evolution." Annual Review of Ecology and Systematics 23, no. 1 (1992): 405–47. http://dx.doi.org/10.1146/annurev.es.23.110192.002201.

13

Akl, Selim G., Michel Cosnard, and Afonso G. Ferreira. "Data-movement-intensive problems: two folk theorems in parallel computation revisited." Theoretical Computer Science 95, no. 2 (1992): 323–37. http://dx.doi.org/10.1016/0304-3975(92)90271-g.

14

LiWang, Minghui, Seyyedali Hosseinalipour, Zhibin Gao, Yuliang Tang, Lianfen Huang, and Huaiyu Dai. "Allocation of Computation-Intensive Graph Jobs Over Vehicular Clouds in IoV." IEEE Internet of Things Journal 7, no. 1 (2020): 311–24. http://dx.doi.org/10.1109/jiot.2019.2949602.

15

Bystrov, Oleg, Ruslan Pacevič, and Arnas Kačeniauskas. "Performance of Communication- and Computation-Intensive SaaS on the OpenStack Cloud." Applied Sciences 11, no. 16 (2021): 7379. http://dx.doi.org/10.3390/app11167379.

Abstract:
The pervasive use of cloud computing has led to many concerns, such as performance challenges in communication- and computation-intensive services on virtual cloud resources. Most evaluations of the infrastructural overhead are based on standard benchmarks. Therefore, the impact of communication issues and infrastructure services on the performance of parallel MPI-based computations remains unclear. This paper presents the performance analysis of communication- and computation-intensive software based on the discrete element method, which is deployed as a service (SaaS) on the OpenStack cloud. The performance measured on KVM-based virtual machines and Docker containers of the OpenStack cloud is compared with that obtained by using native hardware. The improved mapping of computations to multicore resources reduced the internode MPI communication by 34.4% and increased the parallel efficiency from 0.67 to 0.78, which shows the importance of communication issues. Increasing the number of parallel processes, the overhead of the cloud infrastructure increased to 13.7% and 11.2% of the software execution time on native hardware in the case of the Docker containers and KVM-based virtual machines of the OpenStack cloud, respectively. The observed overhead was mainly caused by OpenStack service processes that increased the load imbalance of parallel MPI-based SaaS.
16

Liu, Yang, Jie Yang, Yuan Huang, Lixiong Xu, Siguang Li, and Man Qi. "MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning." Computational Intelligence and Neuroscience 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/297672.

Abstract:
Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated on an experimental MapReduce computer cluster in terms of classification accuracy and computational efficiency.
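The abstract above parallelizes training by partitioning data across mappers and combining the sub-models in a reduce step. A minimal sketch of that data-parallel pattern follows; it is not the paper's Hadoop implementation, and a tiny linear model trained by gradient descent stands in for the neural network.

```python
import numpy as np

def train_map(partition, epochs=50, lr=0.1):
    """'Map': fit a tiny linear model on one data partition by gradient descent."""
    X, y = partition
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)       # one gradient step
    return w

def average_reduce(weights):
    """'Reduce': combine the per-partition models by averaging their weights."""
    return np.mean(weights, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=1200)

partitions = [(X[i::4], y[i::4]) for i in range(4)]         # four mappers
model = average_reduce([train_map(p) for p in partitions])  # sequential stand-in for MapReduce
print(model)   # close to the true coefficients [1.0, -2.0, 0.5]
```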
17

Singh, H., Ming-Hau Lee, Guangming Lu, F. J. Kurdahi, N. Bagherzadeh, and E. M. Chaves Filho. "MorphoSys: an integrated reconfigurable system for data-parallel and computation-intensive applications." IEEE Transactions on Computers 49, no. 5 (2000): 465–81. http://dx.doi.org/10.1109/12.859540.

18

Shih, Chi-Sheng, Joen Chen, Yu-Hsin Wang, and Norman Chang. "Heterogeneous and Elastic Computation Framework for Mobile Cloud Computing." International Journal of Software Engineering and Knowledge Engineering 24, no. 07 (2014): 1013–37. http://dx.doi.org/10.1142/s0218194014400051.

Abstract:
The number and variety of applications for mobile devices continue to grow. However, the resources on mobile devices, including computation and storage, do not keep pace with this growth. How to incorporate the computation capacity of cloud servers into mobile computing has long been a desirable yet challenging issue to resolve. In this work, we design an elastic computation framework that takes advantage of the heterogeneous computation capacity of cloud servers, consisting of CPUs and GPGPUs, to meet the computation demands of ever-growing mobile applications. The framework extends the OpenCL framework to link remote processors with local mobile applications. It is flexible in the sense that the computation can be stopped at any time while still yielding results, which is called imprecise computation in the real-time computing literature. The framework has been evaluated against an OpenCL benchmark suite and a physics computation engine for gaming. The results show that the framework supports the RODINIA OpenCL benchmark suite without code modification, with few exceptions. The elastic computation framework allows the cloud servers to support more mobile clients without sacrificing their QoS requirements. The experimental results also show that IO-intensive applications do not perform well when the network capacity is insufficient or unreliable.
19

Zhai, Yuanzhao, Bo Ding, Pengfei Zhang, and Jie Luo. "Cloudroid Swarm: A QoS-Aware Framework for Multirobot Cooperation Offloading." Wireless Communications and Mobile Computing 2021 (June 18, 2021): 1–18. http://dx.doi.org/10.1155/2021/6631111.

Abstract:
Computation offloading has been widely recognized as an effective way to promote the capabilities of resource-constrained mobile devices. Recent years have seen a renewal of the importance of this technology in the emerging field of mobile robots, supporting resource-intensive robot applications. However, cooperating to solve complex tasks in the physical world, which is a significant feature of a robot swarm compared to traditional mobile computing devices, has not received in-depth attention in research concerned with traditional computation offloading. In this study, we propose an approach named cooperation offloading, which offloads the intensive communication among robots as well as the computation for compute-intensive and data-intensive tasks. We analyze the performance gain of cooperation offloading by formalizing multirobot cooperative models; in addition, we study offloading decisions. Based on this approach, we design a cloud robotic framework named Cloudroid Swarm and develop several QoS-aware mechanisms to provide a general solution to cooperation offloading with QoS assurance in multirobot cooperative scenarios. We implement Cloudroid Swarm to transparently migrate multirobot applications to cloud servers without any code modification. We evaluate our framework using three different multirobot cooperative applications. Our results show that Cloudroid Swarm can be applied to various robotic applications and real-world environments and brings significant benefits in terms of both network optimization and task performance. In addition, our framework has good scalability and can support as many as 256 robot entities simultaneously.
20

Xu, Shilin, and Caili Guo. "Computation Offloading in a Cognitive Vehicular Networks with Vehicular Cloud Computing and Remote Cloud Computing." Sensors 20, no. 23 (2020): 6820. http://dx.doi.org/10.3390/s20236820.

Abstract:
To satisfy the explosive growth of computation-intensive vehicular applications, we investigate the computation offloading problem in a cognitive vehicular network (CVN). Specifically, in our scheme, vehicular cloud computing (VCC)- and remote cloud computing (RCC)-enabled computation offloading are jointly considered. So far, extensive research has been conducted on RCC-based computation offloading, while studies on VCC-based computation offloading are relatively rare. In fact, due to the dynamics and uncertainty of on-board resources, VCC-based computation offloading is more challenging than the RCC one, especially in vehicular scenarios with expensive inter-vehicle communication or a poor communication environment. To solve this problem, we propose to leverage the VCC's computation resources for computation offloading in a perception-exploitation manner, which mainly comprises two stages: resource discovery and computation offloading. In the resource discovery stage, a Long Short-Term Memory (LSTM) model is proposed to predict the on-board resource utilization status at the next time slot from the action-observation history. Thereafter, based on the obtained computation resource distribution, a decentralized multi-agent Deep Reinforcement Learning (DRL) algorithm is proposed to solve collaborative computation offloading with VCC and RCC. Last but not least, the proposed algorithms' effectiveness is verified with a host of numerical simulation results from different perspectives.
21

Elhosuieny, Abdulrahman, Mofreh Salem, Amr Thabet, and Abdelhameed Ibrahim. "ADOMC-NPR Automatic Decision-Making Offloading Framework for Mobile Computation Using Nonlinear Polynomial Regression Model." International Journal of Web Services Research 16, no. 4 (2019): 53–73. http://dx.doi.org/10.4018/ijwsr.2019100104.

Abstract:
Nowadays, mobile computation applications attract major interest from researchers. Limited processing power and short battery lifetime are obstacles to executing computationally intensive applications. This article presents an automatic decision-making offloading framework for mobile computation. The proposed framework consists of two phases: adaptive learning and modeling, and runtime computation offloading. In the adaptive phase, a curve-fitting (CF) technique based on non-linear polynomial regression (NPR) is used to build an approximate time-predicting model that can estimate the execution time required to process the detected computation-intensive applications. The runtime phase uses the time-predicting model to compute the predicted execution time and decide whether to run the application remotely, performing the offloading process, or to run it locally. Eventually, a RESTful web service is applied to carry out the offloading task in the case of a positive offloading decision. The proposed framework experimentally outperforms a competitive state-of-the-art technique by 73% with respect to time. The proposed time-predicting model shows minimal deviation from the originally obtained values, achieving 0.4997, 8.9636, 0.0020, and 0.6797 on the mean squared error metric for the matrix-determinant, image-sharpening, matrix-multiplication, and n-queens problems, respectively.
22

Ahuja, Sanjay P., and Jesus Zambrano. "Mobile Cloud Computing: Offloading Mobile Processing to the Cloud." Computer and Information Science 9, no. 1 (2016): 90. http://dx.doi.org/10.5539/cis.v9n1p90.

Abstract:
The current proliferation of mobile systems, such as smartphones and tablets, has led to their adoption as the primary computing platforms for many users. This trend suggests that designers will continue to aim towards the convergence of functionality on a single mobile device (such as phone + mp3 player + camera + Web browser + GPS + mobile apps + sensors). However, this convergence penalizes the mobile system with respect to computational resources such as processor speed, memory consumption and disk capacity, as well as weight, size, ergonomics and the component most important to users: battery life. Therefore, energy consumption and response time are major concerns when executing complex algorithms on mobile devices, because they require significant resources to solve intricate problems.

Offloading mobile processing is an excellent solution to augment mobile capabilities by migrating computation to powerful infrastructures. Current cloud computing environments for performing complex and data-intensive computation remotely are likely to be an excellent solution for offloading computation and data processing from mobile devices restricted by reduced resources. This research uses cloud computing as a processing platform for computation-intensive workloads while measuring energy consumption and response times on a Samsung Galaxy S5 mobile phone running the Android 4.1 OS.
23

Ferlin, Edson Pedro, Heitor Silvério Lopes, Carlos R. Erig Lima, and Maurício Perretto. "A FPGA-Based Reconfigurable Parallel Architecture for High-Performance Numerical Computation." Journal of Circuits, Systems and Computers 20, no. 05 (2011): 849–65. http://dx.doi.org/10.1142/s0218126611007645.

Abstract:
Many real-world engineering problems require high computational power, especially regarding the processing time. Current parallel processing techniques play an important role in reducing the processing time. Recently, reconfigurable computation has gained large attention thanks to its ability to combine hardware performance and software flexibility. Also, the availability of high-density Field Programmable Gate Array devices and corresponding development systems allowed the popularization of reconfigurable computation, encouraging the development of very complex, compact, and powerful systems for custom applications. This work presents an architecture for parallel reconfigurable computation based on the dataflow concept. This architecture allows reconfigurability of the system for many problems and, particularly, for numerical computation. Several experiments were done analyzing the scalability of the architecture, as well as comparing its performance with other approaches. Overall results are relevant and promising. The developed architecture has performance and scalability suited for engineering problems that demand intensive numerical computation.
24

Dai, Penglin, Zihua Hang, Kai Liu, et al. "Multi-Armed Bandit Learning for Computation-Intensive Services in MEC-Empowered Vehicular Networks." IEEE Transactions on Vehicular Technology 69, no. 7 (2020): 7821–34. http://dx.doi.org/10.1109/tvt.2020.2991641.

25

Lee, Jooheung. "Self-reconfigurable approach for computation-intensive motion estimation algorithm in H.264/AVC." Optical Engineering 51, no. 4 (2012): 047008. http://dx.doi.org/10.1117/1.oe.51.4.047008.

26

Desprez, Frédéric, and Antoine Vernois. "Simultaneous Scheduling of Replication and Computation for Data-Intensive Applications on the Grid." Journal of Grid Computing 4, no. 1 (2006): 19–31. http://dx.doi.org/10.1007/s10723-005-9016-2.

27

Steenackers, G., F. Presezniak, and P. Guillaume. "Development of an adaptive response surface method for optimization of computation-intensive models." Computers & Industrial Engineering 57, no. 3 (2009): 847–55. http://dx.doi.org/10.1016/j.cie.2009.02.016.

28

Babar, Mohammad, and Muhammad Sohail Khan. "ScalEdge: A framework for scalable edge computing in Internet of things–based smart systems." International Journal of Distributed Sensor Networks 17, no. 7 (2021): 155014772110353. http://dx.doi.org/10.1177/15501477211035332.

Abstract:
Edge computing brings storage, computation, and communication services down from the cloud server to the network edge, resulting in low latency and high availability. Internet of things (IoT) devices are resource-constrained and unable to process compute-intensive tasks. The convergence of edge computing and IoT with computation offloading offers a feasible solution in terms of performance. Besides this, computation offloading saves energy, reduces computation time, and extends the battery life of resource-constrained IoT devices. However, edge computing faces a scalability problem when large numbers of IoT devices approach the edge with computation offloading requests. This research article presents a three-tier energy-efficient framework to address the scalability issue in edge computing. We introduce an energy-efficient recursive clustering technique at the IoT layer that prioritizes tasks based on weight. The selected task with the highest weight value is offloaded to the edge server for execution. A lightweight client–server architecture further reduces the computation offloading overhead. The proposed energy-efficient framework for IoT makes efficient computation offloading decisions while considering energy and latency constraints. It minimizes the energy consumption of IoT devices, decreases computation time and computation overhead, and allows the edge server to scale. Numerical results show that the proposed framework satisfies the quality-of-service requirements of both delay-sensitive and delay-tolerant applications by minimizing energy and increasing the lifetime of devices.
29

González, J. Solano, and D. I. Jonest. "Parallel computation of configuration space." Robotica 14, no. 2 (1996): 205–12. http://dx.doi.org/10.1017/s0263574700019111.

Abstract:
Many motion planning methods use Configuration Space to represent a robot manipulator's range of motion and the obstacles which exist in its environment. The Cartesian to Configuration Space mapping is computationally intensive and this paper describes how the execution time can be decreased by using parallel processing. The natural tree structure of the algorithm is exploited to partition the computation into parallel tasks. An implementation programmed in the occam2 parallel computer language running on a network of INMOS transputers is described. The benefits of dynamically scheduling the tasks onto the processors are explained and verified by means of measured execution times on various processor network topologies. It is concluded that excellent speed-up and efficiency can be achieved provided that proper account is taken of the variable task lengths in the computation.
30

Abbas, Aamir, Ali Raza, Farhan Aadil, and Muazzam Maqsood. "Meta-heuristic-based offloading task optimization in mobile edge computing." International Journal of Distributed Sensor Networks 17, no. 6 (2021): 155014772110230. http://dx.doi.org/10.1177/15501477211023021.

Abstract:
With the recent advancements in communication technologies, the realization of computation-intensive applications like virtual/augmented reality, face recognition, and real-time video processing becomes possible on mobile devices. These applications require intensive computations for real-time decision-making and a better user experience. However, mobile devices and Internet of things devices have limited energy and computational power. Executing such computationally intensive tasks on edge devices either leads to high computation latency or high energy consumption. Recently, mobile edge computing has evolved and is used for offloading these complex tasks. In mobile edge computing, Internet of things devices send their tasks to edge servers, which in turn perform fast computation. However, many Internet of things devices and edge servers put an upper limit on concurrent task execution. Moreover, executing a very small task (1 KB) on an edge server causes increased energy consumption due to communication. Therefore, an optimal selection of offloading tasks is required so that response time and energy consumption become minimal. In this article, we propose an optimal selection of offloading tasks using well-known metaheuristics: the ant colony optimization, whale optimization, and grey wolf optimization algorithms, with variants of these algorithms designed for our problem through mathematical modeling. Executing multiple tasks at the server tends to produce a high response time, which leads to overloading and adds latency to task computation. Using values from simulation, we also graphically represent the tradeoff between energy and delay, showing how the two parameters are inversely proportional to each other. Results show that grey wolf optimization outperforms the others in terms of optimizing energy consumption and execution latency while selecting the optimal set of offloading tasks.
31

Chen, Shuang, Ying Chen, Xin Chen, and Yuemei Hu. "Distributed Task Offloading Game in Multiserver Mobile Edge Computing Networks." Complexity 2020 (May 4, 2020): 1–14. http://dx.doi.org/10.1155/2020/7016307.

Abstract:
With the explosion of data traffic, mobile edge computing (MEC) has emerged to solve the problem of high time delay and energy consumption. In order to cope with a large number of computing tasks, the deployment of edge servers is increasingly intensive. Thus, server service areas overlap. We focus on mobile users in overlapping service areas and study the problem of computation offloading for these users. In this paper, we consider a multiuser offloading scenario with intensive deployment of edge servers. In addition, we divide the offloading process into two stages, namely, data transmission and computation execution, in which channel interference and resource preemption are considered, respectively. We apply the noncooperative game method to model and prove the existence of Nash equilibrium (NE). The real-time update computation offloading algorithm (RUCO) is proposed to obtain equilibrium offloading strategies. Due to the high complexity of the RUCO algorithm, the multiuser probabilistic offloading decision (MPOD) algorithm is proposed to improve this problem. We evaluate the performance of the MPOD algorithm through experiments. The experimental results show that the MPOD algorithm can converge after a limited number of iterations and can obtain the offloading strategy with lower cost.
32

Allouche, Mohamed, Tarek Frikha, Mihai Mitrea, Gérard Memmi, and Faten Chaabane. "Lightweight Blockchain Processing. Case Study: Scanned Document Tracking on Tezos Blockchain." Applied Sciences 11, no. 15 (2021): 7169. http://dx.doi.org/10.3390/app11157169.

Abstract:
To bridge the current gap between the Blockchain expectancies and their intensive computation constraints, the present paper advances a lightweight processing solution, based on a load-balancing architecture, compatible with the lightweight/embedding processing paradigms. In this way, the execution of complex operations is securely delegated to an off-chain general-purpose computing machine while the intimate Blockchain operations are kept on-chain. The illustrations correspond to an on-chain Tezos configuration and to a multiprocessor ARM embedded platform (integrated into a Raspberry Pi). The performances are assessed in terms of security, execution time, and CPU consumption when achieving a visual document fingerprint task. It is thus demonstrated that the advanced solution makes it possible for a computing intensive application to be deployed under severely constrained computation and memory resources, as set by a Raspberry Pi 3. The experimental results show that up to nine Tezos nodes can be deployed on a single Raspberry Pi 3 and that the limitation is not derived from the memory but from the computation resources. The execution time with a limited number of fingerprints is 40% higher than using a classical PC solution (value computed with 95% relative error lower than 5%).
33

Gu, Xiaohui, Li Jin, Nan Zhao, and Guoan Zhang. "Energy-Efficient Computation Offloading and Transmit Power Allocation Scheme for Mobile Edge Computing." Mobile Information Systems 2019 (December 16, 2019): 1–9. http://dx.doi.org/10.1155/2019/3613250.

Abstract:
Mobile edge computing (MEC) is considered a promising technique that prolongs battery life and enhances the computation capacity of mobile devices (MDs) by offloading computation-intensive tasks to the resource-rich cloud located at the edges of mobile networks. In this study, the problem of energy-efficient computation offloading with guaranteed performance in multiuser MEC systems was investigated. Given that MDs typically seek lower energy consumption and improve the performance of computing tasks, we provide an energy-efficient computation offloading and transmit power allocation scheme that reduces energy consumption and completion time. We formulate the energy efficiency cost minimization problem, which satisfies the completion time deadline constraint of MDs in an MEC system. In addition, the corresponding Karush–Kuhn–Tucker conditions are applied to solve the optimization problem, and a new algorithm comprising the computation offloading policy and transmission power allocation is presented. Numerical results demonstrate that our proposed scheme, with the optimal computation offloading policy and adapted transmission power for MDs, outperforms local computing and full offloading methods in terms of energy consumption and completion delay. Consequently, our proposed system could help overcome the restrictions on computation resources and battery life of mobile devices to meet the requirements of new applications.
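The entry above weighs device energy against completion time under a deadline. The sketch below uses a common textbook offloading cost model (a Shannon uplink rate, energy paid either for local CPU cycles or for transmission); all constants, function names, and weights are illustrative assumptions, not the paper's exact formulation.

```python
import math

def local_cost(cycles, f_local, kappa=1e-27, w_e=0.5, w_t=0.5):
    """Weighted energy/time cost of computing the task on the device itself."""
    t = cycles / f_local
    e = kappa * cycles * f_local ** 2          # dynamic CPU energy model
    return w_e * e + w_t * t, t

def offload_cost(bits, cycles, p_tx, bandwidth, gain, noise, f_edge,
                 w_e=0.5, w_t=0.5):
    """Weighted cost of offloading: uplink transmission plus edge execution."""
    rate = bandwidth * math.log2(1 + p_tx * gain / noise)   # Shannon rate
    t = bits / rate + cycles / f_edge
    e = p_tx * bits / rate                     # the device only pays for transmission
    return w_e * e + w_t * t, t

def decide(bits, cycles, deadline, **kw):
    """Pick the deadline-feasible option with the smaller weighted cost."""
    cl, tl = local_cost(cycles, kw["f_local"])
    co, to = offload_cost(bits, cycles, kw["p_tx"], kw["bandwidth"],
                          kw["gain"], kw["noise"], kw["f_edge"])
    options = [(c, name) for c, t, name in [(cl, tl, "local"), (co, to, "offload")]
               if t <= deadline]
    return min(options)[1] if options else "infeasible"

print(decide(bits=2e6, cycles=1e9, deadline=0.5, f_local=1e9, p_tx=0.2,
             bandwidth=10e6, gain=1e-6, noise=1e-9, f_edge=10e9))   # -> "offload"
```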
34

Shi, Yongpeng, Yujie Xia, and Ya Gao. "Cross-Server Computation Offloading for Multi-Task Mobile Edge Computing." Information 11, no. 2 (2020): 96. http://dx.doi.org/10.3390/info11020096.

Abstract:
As an emerging network architecture and technology, mobile edge computing (MEC) can alleviate the tension between computation-intensive applications and resource-constrained mobile devices. However, most available studies on computation offloading in MEC assume that the edge servers host various applications and can cope with all kinds of computation tasks, ignoring the limited computing resources and storage capacities of the MEC architecture. To make full use of the available resources deployed on the edge servers, in this paper, we study the cross-server computation offloading problem to realize the collaboration among multiple edge servers for multi-task mobile edge computing, and propose a greedy approximation algorithm as our solution to minimize the overall consumed energy. Numerical results validate that our proposed method can not only give near-optimal solutions with much higher computational efficiency, but also scale well with the growing number of mobile devices and tasks.
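A rough sketch of a greedy approximation for cross-server offloading, in the spirit of the entry above: tasks are assigned one by one to the feasible server with the smallest energy increase. All names, the capacity model, and the energy-per-cycle figures are made up for illustration; this is not the paper's algorithm.

```python
def greedy_cross_server(tasks, servers):
    """Greedy sketch: tasks sorted by workload are assigned, one by one,
    to the feasible server that adds the least energy.

    tasks   : list of (task_id, cpu_cycles)
    servers : dict server_id -> {"capacity": cycles_left, "joule_per_cycle": e}
    Returns a mapping task_id -> server_id (None if nothing fits).
    """
    plan = {}
    for tid, cycles in sorted(tasks, key=lambda t: -t[1]):
        feasible = [(s["joule_per_cycle"] * cycles, sid)
                    for sid, s in servers.items() if s["capacity"] >= cycles]
        if not feasible:
            plan[tid] = None                    # no server can host this task
            continue
        cost, sid = min(feasible)
        servers[sid]["capacity"] -= cycles      # reserve the chosen server's capacity
        plan[tid] = sid
    return plan

servers = {"edge1": {"capacity": 3e9, "joule_per_cycle": 2e-9},
           "edge2": {"capacity": 1e9, "joule_per_cycle": 1e-9}}
print(greedy_cross_server([("t1", 2e9), ("t2", 8e8)], servers))  # {'t1': 'edge1', 't2': 'edge2'}
```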
35

Li, Zhiyuan, and Ershuai Peng. "Software-Defined Optimal Computation Task Scheduling in Vehicular Edge Networking." Sensors 21, no. 3 (2021): 955. http://dx.doi.org/10.3390/s21030955.

Abstract:
With the development of smart vehicles and various vehicular applications, the Vehicular Edge Computing (VEC) paradigm has attracted attention from academia and industry. Compared with the cloud computing platform, VEC has several new features, such as higher network bandwidth and lower transmission delay. Recently, offloading of computation-intensive vehicular tasks has become a new research field for vehicular edge computing networks. However, the dynamic network topology and bursty offloading of computation tasks cause computation load imbalance in VEC networks. To solve this issue, this paper proposes an optimal-control-based computing task scheduling algorithm. We then introduce a software-defined networking/OpenFlow framework to build a software-defined vehicular edge networking structure. The proposed algorithm can obtain globally optimal results and achieve load balancing by virtue of the global load status information. Besides, the proposed algorithm has strong adaptiveness in dynamic network environments through automatic parameter tuning. Experimental results show that the proposed algorithm can effectively improve the utilization of computation resources and meet the computation and transmission delay requirements of various vehicular tasks.
36

Gopnik, Alison. "Rational constructivism: A new way to bridge rationalism and empiricism." Behavioral and Brain Sciences 32, no. 2 (2009): 208–9. http://dx.doi.org/10.1017/s0140525x0900096x.

Abstract:
Recent work in rational probabilistic modeling suggests that a kind of propositional reasoning is ubiquitous in cognition and especially in cognitive development. However, there is no reason to believe that this type of computation is necessarily conscious or resource-intensive.
37

Xu, Bing Li, Hui Lin, Wei Ning Cui, Ya Hu, Jun Zhu, and Sammy Tang. "CUGrid and VGE Based Air Pollution Dispersion Simulation." Advanced Materials Research 143-144 (October 2010): 1305–10. http://dx.doi.org/10.4028/www.scientific.net/amr.143-144.1305.

Abstract:
Air pollution dispersion is a typical geographic process. A reasonable way to simulate air pollution dispersion is to model the dispersion and present the results in a geographically referenced virtual environment. In this research, we apply the concept of virtual geographic environments (VGE), which was coined in the geographic information science community, to facilitate air pollution dispersion simulation by integrating MM5 and VGE. Because MM5 is computation intensive, CUGrid is used in this research to decrease the computation time. Our research focuses on three key points: the platform design, MM5 integration and computation on CUGrid, and geographical visualization of air pollution dispersion in the VGE. Based on the prototype system, a case study of simulating air pollution dispersion in the Pearl River Delta is employed to validate and test the rationality of the methodology. As shown in this case study, the VGE provides a good way to visualize air pollution dispersion, and CUGrid can decrease model computation time significantly.
38

Jianjun, Wang, and Li Li. "Computation Effects of Restructuring China’s Energy Intensive Industries CO2 Emissions Based on STRIPAT Model." Journal of Computational and Theoretical Nanoscience 12, no. 12 (2015): 6260–64. http://dx.doi.org/10.1166/jctn.2015.4664.

39

Long, Teng, Li Liu, and Lei Peng. "Global Optimization Method with Enhanced Adaptive Response Surface Method for Computation-Intensive Design Problems." Advanced Science Letters 5, no. 2 (2012): 881–87. http://dx.doi.org/10.1166/asl.2012.1847.

40

Steenackers, G., R. Versluys, M. Runacres, and P. Guillaume. "Reliability-based design optimization of computation-intensive models making use of response surface models." Quality and Reliability Engineering International 27, no. 4 (2010): 555–68. http://dx.doi.org/10.1002/qre.1166.

41

Meena, V., Obulaporam Gireesha, Kannan Krithivasan, and V. S. Shankar Sriram. "Fuzzy simplified swarm optimization for multisite computational offloading in mobile cloud computing." Journal of Intelligent & Fuzzy Systems 39, no. 6 (2020): 8285–97. http://dx.doi.org/10.3233/jifs-189148.

Abstract:
The rapid technological advancements of Mobile Cloud Computing (MCC) facilitate various computation-intensive applications on smart mobile devices. However, such applications are constrained by the limited processing power, energy, and storage capacity of smart mobile devices. To mitigate these issues, computational offloading is found to be one of the most promising techniques, as it offloads the execution of computation-intensive applications to cloud resources. In addition, various kinds of cloud services and resourceful servers are available for offloading computationally intensive tasks. However, their processing speeds, access delays, computation capabilities, residual memory, and service charges differ, which retards their usage, as decision-making becomes time-consuming and ambiguous. To address these issues, this paper presents a Fuzzy Simplified Swarm Optimization based cloud Computational Offloading (FSSOCO) algorithm to achieve optimal multisite offloading. Fuzzy logic and simplified swarm optimization are employed for the identification of highly powerful nodes and for task decomposition, respectively. The overall performance of FSSOCO is validated using the SPECjvm benchmark suite and compared with state-of-the-art offloading techniques in terms of weighted total cost, energy consumption, and processing time.
42

Mansouri, Najme. "Improve the Performance of Data Grids by Cost-Based Job Scheduling Strategy." Computer Engineering and Applications Journal 3, no. 2 (2014): 100–111. http://dx.doi.org/10.18495/comengapp.v3i2.52.

Abstract:
Grid environments have gained tremendous importance in recent years as application requirements have increased drastically. The heterogeneity and geographic dispersion of grid resources and applications pose complex problems such as job scheduling. Most existing scheduling strategies in Grids focus on only one kind of Grid job, which can be data-intensive or computation-intensive. However, considering only one kind of job in scheduling does not result in suitable scheduling from the viewpoint of the whole system, and sometimes wastes resources on the other side. To address the challenge of simultaneously considering both kinds of jobs, a new Cost-Based Job Scheduling (CJS) strategy is proposed in this paper. On the one hand, the CJS algorithm considers both the data and computational resource availability of the network; on the other hand, considering the corresponding requirements of each job, it assigns a value called W to the job. Using the W value, the importance of the two aspects (being data- or computation-intensive) for each job is determined, and the job is then assigned to the available resources. Simulation results with OptorSim show that CJS outperforms the existing algorithms mentioned in the literature as the number of jobs increases.
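A hypothetical illustration of the W value described above: a job's computation and data demands are normalised and combined into one score, and the score decides whether the job follows its computation (fastest CPU) or its data (the site already holding the input). The normalisation constants and site fields are invented for the example, not taken from the paper.

```python
def job_weight(cpu_cycles, input_bytes, cpu_norm=1e9, data_norm=1e8):
    """Hypothetical W value: above 0.5 the job is computation-dominated,
    below 0.5 it is data-dominated (normalisation constants are made up)."""
    c = cpu_cycles / cpu_norm
    d = input_bytes / data_norm
    return c / (c + d)

def assign(job, sites):
    """Send computation-heavy jobs to the fastest CPU and data-heavy jobs
    to the site that already caches (most of) the job's input data."""
    w = job_weight(job["cycles"], job["bytes"])
    if w >= 0.5:
        return max(sites, key=lambda s: s["cpu_speed"])["name"]
    return max(sites, key=lambda s: s["cached_bytes"].get(job["id"], 0))["name"]

sites = [{"name": "siteA", "cpu_speed": 3e9, "cached_bytes": {}},
         {"name": "siteB", "cpu_speed": 1e9, "cached_bytes": {"j1": 5e7}}]
print(assign({"id": "j1", "cycles": 2e8, "bytes": 6e7}, sites))  # data-heavy -> siteB
```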
43

Aharoni, Gad, Amnon Barak, and Amir Ronen. "A competitive algorithm for managing sharing in the distributed execution of functional programs." Journal of Functional Programming 7, no. 4 (1997): 421–40. http://dx.doi.org/10.1017/s095679689700275x.

Abstract:
Execution of functional programs on distributed-memory multiprocessors gives rise to the problem of evaluating expressions that are shared between several Processing Elements (PEs). One of the main difficulties of solving this problem is that, for a given shared expression, it is not known in advance whether realizing the sharing is more cost effective than duplicating its evaluation. Realizing the sharing requires coordination between the sharing PEs to ensure that the shared expression is evaluated only once. This coordination involves relatively high communication costs, and is therefore only worthwhile when the shared expressions require much computation time to evaluate. In contrast, when the shared expression is not computation intensive, it is more cost effective to duplicate the evaluation, and thus avoid the communication overhead costs. This dilemma of deciding whether to duplicate the work or to realize the sharing stems from the unknown computation time that is required to evaluate a shared expression. This computation time is difficult to estimate due to unknown run-time evolution of loops and recursion that may be part of the expression. This paper presents an on-line (run-time) algorithm that decides which of the expressions that are shared between several PEs should be evaluated only once, and which expressions should be evaluated locally by each sharing PE. By applying competitive considerations, the algorithm manages to exploit sharing of computation-intensive expressions, while it duplicates the evaluation of expressions that require little time to compute. The algorithm accomplishes this goal even though it has no a priori knowledge of the amount of computation that is required to evaluate the shared expression. We show that this algorithm is competitive with a hypothetical optimal off-line algorithm, which does have such knowledge, and we prove that the algorithm is deadlock free. Furthermore, this algorithm does not require any programmer intervention, it has low overhead, and it is designed to run on a wide variety of distributed systems.
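The entry above is an online, competitive decision in the rent-or-buy style: keep duplicating the evaluation of a shared expression while it stays cheap, and pay the one-off communication cost of realising the sharing once the work spent reaches that cost. The sketch below illustrates only this ski-rental flavour; it is not the paper's algorithm, and the roughly-2-competitive remark is the classic rent-or-buy bound rather than a claim about the paper.

```python
def evaluate_shared(step_costs, comm_cost):
    """Rent-or-buy rule: duplicate the evaluation locally until the work
    spent reaches the communication cost of realising the sharing, then
    pay comm_cost once and fetch the shared result instead.

    step_costs is an iterable of per-step computation costs, unknown in
    advance; this rule is roughly 2-competitive against an offline
    optimum that knows the total cost (classic ski-rental argument)."""
    spent = 0.0
    for c in step_costs:
        if spent + c > comm_cost:        # expensive expression: realise the sharing
            return spent + comm_cost, "shared"
        spent += c                       # cheap so far: keep duplicating the work
    return spent, "duplicated"

print(evaluate_shared([0.1] * 5, comm_cost=2.0))                  # small job: duplicate
print(evaluate_shared((0.5 for _ in range(100)), comm_cost=2.0))  # big job: share
```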
44

Li, Xianwei, and Baoliu Ye. "Latency-Aware Computation Offloading for 5G Networks in Edge Computing." Security and Communication Networks 2021 (September 22, 2021): 1–15. http://dx.doi.org/10.1155/2021/8800234.

Abstract:
With the development of Internet of Things, massive computation-intensive tasks are generated by mobile devices whose limited computing and storage capacity lead to poor quality of services. Edge computing, as an effective computing paradigm, was proposed for efficient and real-time data processing by providing computing resources at the edge of the network. The deployment of 5G promises to speed up data transmission but also further increases the tasks to be offloaded. However, how to transfer the data or tasks to the edge servers in 5G for processing with high response efficiency remains a challenge. In this paper, a latency-aware computation offloading method in 5G networks is proposed. Firstly, the latency and energy consumption models of edge computation offloading in 5G are defined. Then the fine-grained computation offloading method is employed to reduce the overall completion time of the tasks. The approach is further extended to solve the multiuser computation offloading problem. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted. The results show that the proposed offloading method can effectively reduce the execution latency of the tasks.
45

Shan, Nanliang, Yu Li, and Xiaolong Cui. "A Multilevel Optimization Framework for Computation Offloading in Mobile Edge Computing." Mathematical Problems in Engineering 2020 (June 27, 2020): 1–17. http://dx.doi.org/10.1155/2020/4124791.

Abstract:
Mobile edge computing is a new computing paradigm that can extend cloud computing capabilities to the edge network, supporting computation-intensive applications such as face recognition, natural language processing, and augmented reality. Notably, computation offloading is a key technology of mobile edge computing for improving mobile devices' performance and users' experience by offloading local tasks to edge servers. In this paper, the problem of computation offloading under multiuser, multiserver, and multichannel scenarios is studied, and a computation offloading framework is proposed that considers the quality of service (QoS) of users, server resources, and channel interference. This framework consists of three levels. (1) In the offloading decision stage, the offloading decision is made based on how beneficial computation offloading is, measured by comparing the total cost of local computing on mobile devices with that of the edge-side server. (2) In the edge server selection stage, the candidate is comprehensively evaluated and selected for computation offloading by a multiobjective decision based on the covariance-based Analytic Hierarchy Process (Cov-AHP). (3) In the channel selection stage, a multiuser, multichannel distributed computation offloading strategy based on a potential game is proposed, considering the influence of channel interference on the user's overall overhead. The corresponding multiuser, multichannel task scheduling algorithm is designed to maximize the overall benefit by finding the Nash equilibrium point of the potential game. Extensive experimental results show that the proposed framework can greatly increase the number of users who benefit from computation offloading and effectively reduce energy consumption and time delay.
46

Zhou, Wenchen, Weiwei Fang, Yangyang Li, Bo Yuan, Yiming Li, and Tian Wang. "Markov Approximation for Task Offloading and Computation Scaling in Mobile Edge Computing." Mobile Information Systems 2019 (January 23, 2019): 1–12. http://dx.doi.org/10.1155/2019/8172698.

Abstract:
Mobile edge computing (MEC) provides cloud-computing services for mobile devices to offload intensive computation tasks to the physically proximal MEC servers. In this paper, we consider a multiserver system where a single mobile device asks for computation offloading to multiple nearby servers. We formulate this offloading problem as the joint optimization of computation task assignment and CPU frequency scaling, in order to minimize a tradeoff between task execution time and mobile energy consumption. The resulting optimization problem is combinatorial in essence, and the optimal solution generally can only be obtained by exhaustive search with extremely high complexity. Leveraging the Markov approximation technique, we propose a light-weight algorithm that can provably converge to a bounded near-optimal solution. The simulation results show that the proposed algorithm is able to generate near-optimal solutions and outperform other benchmark algorithms.
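The entry above uses Markov approximation to search the combinatorial space of task assignments and CPU frequencies. The sketch below shows the generic Gibbs-sampling flavour of that technique: one decision is re-drawn at a time with probability proportional to exp(-beta * cost). The cost model, parameters, and names are illustrative assumptions, not the paper's formulation.

```python
import math, random

def cost(assign, freq, tasks, servers, w_t=0.5, w_e=0.5, kappa=1e-27):
    """Tradeoff between execution time and mobile energy for one configuration:
    tasks assigned None run locally at CPU frequency `freq`, the rest run on servers."""
    local = [c for c, s in zip(tasks, assign) if s is None]
    t_local = sum(local) / freq if local else 0.0
    e_local = kappa * sum(local) * freq ** 2
    t_remote = max((c / servers[s] for c, s in zip(tasks, assign)
                    if s is not None), default=0.0)
    return w_t * max(t_local, t_remote) + w_e * e_local

def markov_approx(tasks, servers, freqs, beta=50.0, iters=3000, seed=1):
    """Gibbs-sampling-style search: repeatedly resample one decision with
    probability proportional to exp(-beta * cost); larger beta concentrates
    the stationary distribution on near-optimal configurations."""
    rng = random.Random(seed)
    assign = [None] * len(tasks)                # None = run locally
    freq = freqs[0]
    best = (cost(assign, freq, tasks, servers), list(assign), freq)
    for _ in range(iters):
        i = rng.randrange(len(tasks) + 1)
        if i < len(tasks):                      # resample one task's placement
            choices = [None] + list(range(len(servers)))
            weights = [math.exp(-beta * cost(assign[:i] + [c] + assign[i + 1:],
                                             freq, tasks, servers)) for c in choices]
            assign[i] = rng.choices(choices, weights)[0]
        else:                                   # resample the CPU frequency
            weights = [math.exp(-beta * cost(assign, f, tasks, servers)) for f in freqs]
            freq = rng.choices(freqs, weights)[0]
        c = cost(assign, freq, tasks, servers)
        if c < best[0]:
            best = (c, list(assign), freq)
    return best

print(markov_approx(tasks=[4e8, 6e8, 2e8], servers=[5e9, 8e9], freqs=[0.8e9, 1.2e9]))
```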
47

Mansouri, Najme. "A Hybrid Approach for Scheduling based on Multi-criteria Decision Method in Data Grid." Computer Engineering and Applications Journal 3, no. 1 (2014): 1–12. http://dx.doi.org/10.18495/comengapp.v3i1.44.

Abstract:
Grid computing environments have emerged following scientists' demand for very high computing power and storage capacity. One of the challenges in the use of these environments is the performance problem. To improve performance, scheduling techniques are used. Most existing scheduling strategies in Grids focus on only one kind of Grid job, which can be data-intensive or computation-intensive. However, considering only one kind of job in scheduling does not result in suitable scheduling from the viewpoint of the whole system, and sometimes wastes resources on the other side. To address the challenge of simultaneously considering both kinds of jobs, a new Hybrid Job Scheduling (HJS) strategy is proposed in this paper. On the one hand, the HJS algorithm considers both the data and computational resource availability of the network; on the other hand, considering the corresponding requirements of each job, it assigns a value called W to the job. Using the W value, the importance of the two aspects (being data- or computation-intensive) for each job is determined, and the job is then assigned to the available resources. Simulation results with OptorSim show that HJS outperforms the existing algorithms mentioned in the literature as the number of jobs increases.
48

Deng, Yi, and Shi-Kuo Chang. "A Framework for the Modeling and Prototyping of Distributed Information Systems." International Journal of Software Engineering and Knowledge Engineering 01, no. 03 (1991): 203–26. http://dx.doi.org/10.1142/s0218194091000172.

Abstract:
The major issues in designing Distributed Information Systems (DIS) include localization of control and data, inherent concurrency, intensive interactions among computation agents, history sensitivity, dynamic configuration and continuous system change and evolution. We propose a framework, called the G-Net model, for the specification, modeling and prototyping of DIS. The G-Net model not only provides a flexible notation to represent the executable specification of DIS through G-Net instantiation, but also offers a novel style of decentralized concurrent computation, allowing flexible inter-agent communication and interaction. A prototype of the G-Net framework has been implemented on workstations connected by a local network.
49

Ryu, June-Woo, Quoc-Viet Pham, Huynh N. T. Luan, Won-Joo Hwang, Jong-Deok Kim, and Jung-Tae Lee. "Multi-Access Edge Computing Empowered Heterogeneous Networks: A Novel Architecture and Potential Works." Symmetry 11, no. 7 (2019): 842. http://dx.doi.org/10.3390/sym11070842.

Abstract:
One of the most promising approaches to address the mismatch between computation-intensive applications and computation-limited end devices is multi-access edge computing (MEC). To cope with the rapid increase in traffic volume and offload the traffic from macrocells, a massive number of small cells have been deployed, forming so-called heterogeneous networks (HetNets). Strongly motivated by the close integration of MEC and HetNets, in this paper we propose an envisioned architecture of MEC-empowered HetNets, where both wireless and wired backhaul solutions are supported, flying base stations (BSs) can be equipped with MEC servers, and mobile users (MUs) need both communication and computation resources for their computationally heavy tasks. Subsequently, we summarize the research progress on task offloading and resource allocation in the proposed MEC-empowered unmanned aerial vehicle (UAV)-assisted heterogeneous networks. We conclude this article by spotlighting key challenges and open directions for future research.
50

Sun, Jianan, Qing Gu, Tao Zheng, Ping Dong, and Yajuan Qin. "Joint communication and computing resource allocation in vehicular edge computing." International Journal of Distributed Sensor Networks 15, no. 3 (2019): 155014771983785. http://dx.doi.org/10.1177/1550147719837859.

Abstract:
The emergence of computation-intensive vehicle applications poses a significant challenge to the limited computation capacity of on-board equipment. Mobile edge computing has been recognized as a promising paradigm for providing high-performance vehicle services by offloading the applications to edge servers. However, it is still a challenge to efficiently utilize the available resources of vehicle nodes. In this article, we introduce mobile edge computing technology into vehicular ad hoc networks to build a vehicular edge computing system, which provides a wide range of reliable services by utilizing the computing resources of vehicles on the road. Then, we study the computation offloading decision problem in this system and propose a novel multi-objective vehicular edge computing task scheduling algorithm which jointly optimizes the allocation of communication and computing resources. Extensive performance evaluation demonstrates that the proposed algorithm can effectively shorten task execution time and has high reliability.