To see the other types of publications on this topic, follow the link: Memory offloading.

Journal articles on the topic 'Memory offloading'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Memory offloading.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Grinschgl, Sandra, Frank Papenmeier, and Hauke S. Meyerhoff. "Consequences of cognitive offloading: Boosting performance but diminishing memory." Quarterly Journal of Experimental Psychology 74, no. 9 (2021): 1477–96. http://dx.doi.org/10.1177/17470218211008060.

Full text
Abstract:
Modern technical tools such as tablets allow for the temporal externalisation of working memory processes (i.e., cognitive offloading). Although such externalisations support immediate performance on different tasks, little is known about potential long-term consequences of offloading behaviour. In the current set of experiments, we studied the relationship between cognitive offloading and subsequent memory for the offloaded information as well as the interplay of this relationship with the goal to acquire new memory representations. Our participants solved the Pattern Copy Task, in which we manipulated the costs of cognitive offloading and the awareness of a subsequent memory test. In Experiment 1 (N = 172), we showed that increasing the costs for offloading induces reduced offloading behaviour. This reduction in offloading came along with lower immediate task performance but more accurate memory in an unexpected test. In Experiment 2 (N = 172), we confirmed these findings and observed that offloading behaviour remained detrimental for subsequent memory performance when participants were aware of the upcoming memory test. Interestingly, Experiment 3 (N = 172) showed that cognitive offloading is not detrimental for long-term memory formation under all circumstances. Those participants who were forced to offload maximally but were aware of the memory test could almost completely counteract the negative impact of offloading on memory. Our experiments highlight the importance of the explicit goal to acquire new memory representations when relying on technical tools, as offloading did have detrimental effects on memory without such a goal.
APA, Harvard, Vancouver, ISO, and other styles
2

Kelly, Megan O., and Evan F. Risko. "Offloading memory: Serial position effects." Psychonomic Bulletin & Review 26, no. 4 (2019): 1347–53. http://dx.doi.org/10.3758/s13423-019-01615-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Risko, E. F., M. O. Kelly, P. Patel, and C. Gaspar. "Offloading memory leaves us vulnerable to memory manipulation." Cognition 191 (October 2019): 103954. http://dx.doi.org/10.1016/j.cognition.2019.04.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tarde, Yashada, and Prema Joshi. "A Study of Effect of Cognitive Offloading on Instant Performance and Metamemory in Short-Term Task." Indian Journal of Behavioural Sciences 26, no. 02 (2023): 85–94. http://dx.doi.org/10.55229/ijbs.v26i2.03.

Full text
Abstract:
Background: Cognitive offloading reduces the demand on mental processing through physical actions such as setting reminders. The increasing use of smart gadgets to externalise memory makes us more prone to developing a tendency to offload. The younger generation has resorted to offloading from childhood onward owing to casual access to gadgets, without evaluating their metamemory. Recurrent offloading might reduce their ability to use their own thought processes whenever required.
Objective: To compare and assess the performance of adults and adolescents in an instant task and a short-term task, including cost-benefit evaluation, with and without offloading.
Material & Methods: The present study was conducted on 186 participants divided into two age groups, adults (18-40 years) and adolescents (13-17 years), after obtaining appropriate written informed consent. The study commenced after approval from the ethics committee. The two groups were further divided into offloading-permitted and offloading-not-permitted arms based on a randomised computer-generated sequence. This cross-sectional study analysed performance on a colour block test, forward digit recall, and backward digit recall in two sets, one immediately after the sequence was presented and the other half an hour later. A cost-benefit evaluation was also assessed, and working memory was assessed to compare the two age groups. Statistical analysis was done using paired and unpaired t-tests.
Result: A significant increase in instant performance and a decline in short-term performance were seen in the offloaded group. Performance in the offloaded group decreased significantly when cost-benefit evaluation was added. A reduction in working memory performance was seen in the younger age group.
Interpretation & Conclusion: Cognitive offloading increases instant performance but adversely affects the development of long-term and working memory.
APA, Harvard, Vancouver, ISO, and other styles
5

Zulfa, Mulki Indana, Rudy Hartanto, Adhistya Erna Permanasari, and Waleed Ali. "LRU-GENACO: A Hybrid Cached Data Optimization Based on the Least Used Method Improved Using Ant Colony and Genetic Algorithms." Electronics 11, no. 19 (2022): 2978. http://dx.doi.org/10.3390/electronics11192978.

Full text
Abstract:
An optimization strategy for cached data offloading plays a crucial role in the edge network environment. This strategy can improve the performance of edge nodes with limited cache memory to serve data service requests from user terminals. The main challenge that must be solved in optimizing cached data offloading is assessing and selecting the cached data with the highest profit to be stored in the cache memory. Selecting the appropriate cached data can improve the utility of memory space to increase HR and reduce LSR. In this paper, we model the cached data offloading optimization strategy as the classic 0/1 knapsack optimization problem (KP01). The cached data offloading optimization strategy is then improved using a hybrid approach of three algorithms: LRU, ACO, and GA, called LRU-GENACO. The proposed LRU-GENACO was tested using four real proxy log datasets from IRCache. The simulation results show that the proposed LRU-GENACO hit ratio is superior to the LRU GDS SIZE algorithms by 13.1%, 26.96%, 53.78%, and 81.69%, respectively. The proposed LRU-GENACO method also reduces the average latency by 25.27%.
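
The KP01 formulation mentioned in this abstract maps naturally onto a small dynamic-programming sketch. The snippet below is a minimal illustration of selecting cached objects by estimated profit under a cache-capacity constraint; it is not the authors' LRU-GENACO hybrid, and the object sizes, profits, and capacity are made-up example values.

```python
# Minimal sketch: cached-data selection as a 0/1 knapsack (KP01).
# Profits and sizes are illustrative; LRU-GENACO additionally refines
# the selection with ant colony and genetic search, which is omitted here.

def kp01_select(sizes, profits, capacity):
    """Return (best_profit, chosen_indices) for the 0/1 knapsack."""
    n = len(sizes)
    dp = [0] * (capacity + 1)                      # dp[c] = best profit with capacity c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, sizes[i] - 1, -1):  # iterate downwards for 0/1 semantics
            candidate = dp[c - sizes[i]] + profits[i]
            if candidate > dp[c]:
                dp[c] = candidate
                keep[i][c] = True
    # Backtrack to recover which objects were admitted to the cache.
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= sizes[i]
    return dp[capacity], sorted(chosen)

if __name__ == "__main__":
    sizes = [4, 2, 3, 1, 5]          # object sizes (e.g., KB)
    profits = [10, 4, 7, 3, 11]      # estimated caching profit per object
    best, chosen = kp01_select(sizes, profits, capacity=8)
    print(best, chosen)              # -> 20 [0, 2, 3]
```
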
APA, Harvard, Vancouver, ISO, and other styles
6

Kelly, Megan O., and Evan F. Risko. "The isolation effect when offloading memory." Journal of Applied Research in Memory and Cognition 8, no. 4 (2019): 471–80. http://dx.doi.org/10.1037/h0101842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kelly, Megan O., and Evan F. Risko. "The Isolation Effect When Offloading Memory." Journal of Applied Research in Memory and Cognition 8, no. 4 (2019): 471–80. http://dx.doi.org/10.1016/j.jarmac.2019.10.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lu, Baotong, Kaisong Huang, Chieh-Jan Mike Liang, Tianzheng Wang, and Eric Lo. "DEX: Scalable Range Indexing on Disaggregated Memory." Proceedings of the VLDB Endowment 17, no. 10 (2024): 2603–16. http://dx.doi.org/10.14778/3675034.3675050.

Full text
Abstract:
Memory disaggregation can potentially allow memory-optimized range indexes such as B+-trees to scale beyond one machine while attaining high hardware utilization and low cost. Designing scalable indexes on disaggregated memory, however, is challenging due to rudimentary caching, unprincipled offloading and excessive inconsistency among servers. This paper proposes DEX, a new scalable B+-tree for memory disaggregation. DEX includes a set of techniques to reduce remote accesses, including logical partitioning, lightweight caching and cost-aware offloading. Our evaluation shows that DEX can outperform the state-of-the-art by 1.7--56.3×, and the advantage remains under various setups, such as cache size and skewness.
APA, Harvard, Vancouver, ISO, and other styles
9

Soares, Julia S., and Benjamin C. Storm. "Exploring functions of and recollections with photos in the age of smartphone cameras." Memory Studies 15, no. 2 (2021): 287–303. http://dx.doi.org/10.1177/17506980211044712.

Full text
Abstract:
People often report taking photos to aid memory. Two mixed-method surveys were used to investigate participants’ reasons for taking photos, focusing specifically on memory-related reasons, which were split into two sub-types: photos taken as mementos, and photos taken as a means of offloading information. Participants reported their motivations for taking a sample of photos and then rated their recollective experience of each photographed event. Across both studies, participants reported recollecting events associated with a memento goal more vividly, more positively, and with more emotional intensity than events associated with an offloading goal. As expected, events photographed with a memento goal were also rated by participants to be more reflective of a shared memory system between the participants and the camera than were events photographed with an offloading goal. These findings suggest that people’s motivations when taking photos tend to be associated with different types of recollective experiences, as well as different judgments about where personal information is located in a blended human-camera memory system.
APA, Harvard, Vancouver, ISO, and other styles
11

Zaman, Sardar Khaliq uz, Ali Imran Jehangiri, Tahir Maqsood, et al. "COME-UP: Computation Offloading in Mobile Edge Computing with LSTM Based User Direction Prediction." Applied Sciences 12, no. 7 (2022): 3312. http://dx.doi.org/10.3390/app12073312.

Full text
Abstract:
In mobile edge computing (MEC), mobile devices with limited computation and memory resources offload compute-intensive tasks to nearby edge servers. User movement causes frequent handovers in 5G urban networks. The resultant delays in task execution due to unknown user position and base station lead to increased energy consumption and resource wastage. The current MEC offloading solutions separate computation offloading from user mobility. For task offloading, techniques that predict the user’s future location do not consider user direction. We propose a framework termed COME-UP: Computation Offloading in Mobile Edge computing with Long Short-Term Memory (LSTM)-based User direction Prediction. The nature of the mobility data is nonlinear and leads to a time series prediction problem. The LSTM considers the previous mobility features, such as location, velocity, and direction, as input to a feed-forward mechanism to train the learning model and predict the next location. The proposed architecture also uses a fitness function to calculate priority weights for selecting an optimum edge server for task offloading based on latency, energy, and server load. The simulation results show that the latency and energy consumption of COME-UP are lower than the baseline techniques, while the edge server utilization is enhanced.
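
As a rough illustration of LSTM-based next-location prediction of the kind described here (not the authors' COME-UP implementation), the sketch below feeds a window of past mobility features (position, speed, heading) to a small PyTorch LSTM and regresses the next location. The tensor shapes, layer sizes, and the synthetic straight-line trajectories are all assumptions for demonstration.

```python
# Minimal sketch of LSTM-based next-location prediction (assumed shapes and toy data).
import math
import torch
import torch.nn as nn

class NextLocationLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # predict next (x, y)

    def forward(self, seq):                # seq: (batch, window, n_features)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1, :])    # use the last hidden state

def make_synthetic_batch(batch=64, window=10):
    """Straight-line trajectories with constant speed and heading (toy data)."""
    t = torch.arange(window + 1, dtype=torch.float32)
    start = torch.rand(batch, 2) * 100
    heading = torch.rand(batch, 1) * 2 * math.pi
    speed = torch.rand(batch, 1) * 2 + 0.5
    dx = torch.cat([torch.cos(heading), torch.sin(heading)], dim=1) * speed
    path = start[:, None, :] + t[None, :, None] * dx[:, None, :]   # (batch, window+1, 2)
    feats = torch.cat([path[:, :window, :],
                       speed[:, None, :].expand(-1, window, -1),
                       heading[:, None, :].expand(-1, window, -1)], dim=2)
    return feats, path[:, -1, :]           # feature window and next location

model = NextLocationLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(200):                    # tiny training loop on toy data
    x, y = make_synthetic_batch()
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```
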
APA, Harvard, Vancouver, ISO, and other styles
12

Gilbert, Sam J. "Memory Augmentation, Cognitive Offloading, and Digital Technology." Psychological Inquiry 35, no. 2 (2024): 110–12. http://dx.doi.org/10.1080/1047840x.2024.2384129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Nai, Lifeng, Ramyad Hadidi, He Xiao, Hyojong Kim, Jaewoong Sim, and Hyesoon Kim. "Thermal-aware processing-in-memory instruction offloading." Journal of Parallel and Distributed Computing 130 (August 2019): 193–207. http://dx.doi.org/10.1016/j.jpdc.2019.03.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Chen, Siyuan, Zhuofeng Wang, Zelong Guan, Yudong Liu, and Phillip B. Gibbons. "Practical Offloading for Fine-Tuning LLM on Commodity GPU via Learned Sparse Projectors." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 22 (2025): 23614–22. https://doi.org/10.1609/aaai.v39i22.34531.

Full text
Abstract:
Fine-tuning large language models (LLMs) requires significant memory, often exceeding the capacity of a single GPU. A common solution to this memory challenge is offloading compute and data from the GPU to the CPU. However, this approach is hampered by the limited bandwidth of commodity hardware, which constrains communication between the CPU and GPU, and by slower matrix multiplications on the CPU. In this paper, we present an offloading framework, LSP-Offload, that enables near-native speed LLM fine-tuning on commodity hardware through learned sparse projectors. Our data-driven approach involves learning efficient sparse compressors that minimize communication with minimal precision loss. Additionally, we introduce a novel layer-wise communication schedule to maximize parallelism between communication and computation. As a result, our framework can fine-tune a 1.3 billion parameter model on a 4GB laptop GPU and a 6.7 billion parameter model on an NVIDIA RTX 4090 GPU with 24GB memory. Compared to state-of-the-art offloading frameworks, our approach reduces end-to-end fine-tuning time by 33.1%-62.5% when converging to the same accuracy.
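
To make the offloading pattern concrete, here is a hedged sketch of the general idea of compressing a gradient through a projector before shipping it from GPU to CPU and decompressing the update on the way back. It is not LSP-Offload itself: the projector is a fixed random matrix rather than a learned sparse one, and the dimensions, objective, and optimizer are illustrative.

```python
# Sketch of projector-based gradient offloading between GPU and CPU.
# A fixed random projector stands in for the learned sparse projectors
# of LSP-Offload; dimensions and hyperparameters are illustrative.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
d, k = 4096, 256                      # full and compressed gradient dimensions

torch.manual_seed(0)
P = torch.randn(d, k, device=device) / k**0.5   # projector kept on the GPU
param = torch.randn(d, device=device, requires_grad=True)
cpu_state = torch.zeros(k)            # optimizer state held in CPU memory

def cpu_optimizer_step(compressed_grad, state, lr=0.01, momentum=0.9):
    """Momentum update carried out on the CPU in the compressed space."""
    state.mul_(momentum).add_(compressed_grad)
    return -lr * state

for step in range(3):
    loss = (param ** 2).sum()         # toy objective
    loss.backward()
    with torch.no_grad():
        g_low = (P.T @ param.grad).to("cpu")       # compress, then GPU -> CPU
        delta_low = cpu_optimizer_step(g_low, cpu_state)
        param += P @ delta_low.to(device)          # CPU -> GPU, decompress, apply
        param.grad.zero_()
    print(f"step {step}: loss={loss.item():.1f}")
```
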
APA, Harvard, Vancouver, ISO, and other styles
15

Liu, Yanpei, Wei Huang, Liping Wang, Yunjing Zhu, and Ningning Chen. "Dynamic computation offloading algorithm based on particle swarm optimization with a mutation operator in multi-access edge computing." Mathematical Biosciences and Engineering 18, no. 6 (2021): 9163–89. http://dx.doi.org/10.3934/mbe.2021452.

Full text
Abstract:
The current computation offloading algorithms for the mobile cloud ignore the selection of offloading opportunities and do not account for the user's probability of successful offloading, which leads to frequent offloading, resource waste, and reduced energy efficiency. Therefore, in this study, a dynamic computation offloading algorithm based on particle swarm optimization with a mutation operator in a multi-access edge computing environment (DCO-PSOMO) is proposed. According to the CPU utilization and the memory utilization rate of the mobile terminal, this method can dynamically obtain the overload time by using a strong, locally weighted regression method. After detecting the overload time, the probability of successful offloading is predicted from the mobile user's dwell time and the edge computing communication range, and the offloading is either conducted immediately or delayed. A computation offloading model was established via the use of the response time and energy consumption of the mobile terminal. Additionally, the optimal computation offloading algorithm was designed via the use of a particle swarm with a mutation operator. Finally, the DCO-PSOMO algorithm was compared with the JOCAP, ECOMC and ESRLR algorithms, and the experimental results demonstrated that the DCO-PSOMO offloading method can effectively reduce the offloading cost and terminal energy consumption, and improve the success probability of offloading and the user's QoS.
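
For readers unfamiliar with the optimizer family used here, the following is a generic particle swarm optimization loop with a simple mutation operator, minimizing a toy weighted delay-plus-energy cost. It is not the DCO-PSOMO algorithm itself; the cost function, bounds, and hyperparameters are placeholders.

```python
# Generic PSO with a mutation operator on a toy offloading-cost objective.
# Not DCO-PSOMO: the cost model and all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def cost(x):
    """Toy weighted sum of 'delay' and 'energy' terms over decision variables x in [0, 1]."""
    delay = np.sum((x - 0.3) ** 2)
    energy = np.sum(np.abs(x - 0.7))
    return 0.6 * delay + 0.4 * energy

def pso_with_mutation(dim=8, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5, p_mut=0.1):
    pos = rng.random((swarm, dim))
    vel = rng.normal(0, 0.1, (swarm, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(cost, 1, pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        # Mutation operator: randomly reset a few coordinates to escape local optima.
        mask = rng.random((swarm, dim)) < p_mut
        pos = np.where(mask, rng.random((swarm, dim)), pos)
        vals = np.apply_along_axis(cost, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest, gbest_val = pos[np.argmin(vals)].copy(), vals.min()
    return gbest, gbest_val

best_x, best_cost = pso_with_mutation()
print("best cost:", round(best_cost, 4))
```
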
APA, Harvard, Vancouver, ISO, and other styles
16

Khan, Kamil, Sudeep Pasricha, and Ryan Gary Kim. "A Survey of Resource Management for Processing-In-Memory and Near-Memory Processing Architectures." Journal of Low Power Electronics and Applications 10, no. 4 (2020): 30. http://dx.doi.org/10.3390/jlpea10040030.

Full text
Abstract:
Due to the amount of data involved in emerging deep learning and big data applications, operations related to data movement have quickly become a bottleneck. Data-centric computing (DCC), as enabled by processing-in-memory (PIM) and near-memory processing (NMP) paradigms, aims to accelerate these types of applications by moving the computation closer to the data. Over the past few years, researchers have proposed various memory architectures that enable DCC systems, such as logic layers in 3D-stacked memories or charge-sharing-based bitwise operations in dynamic random-access memory (DRAM). However, application-specific memory access patterns, power and thermal concerns, memory technology limitations, and inconsistent performance gains complicate the offloading of computation in DCC systems. Therefore, designing intelligent resource management techniques for computation offloading is vital for leveraging the potential offered by this new paradigm. In this article, we survey the major trends in managing PIM and NMP-based DCC systems and provide a review of the landscape of resource management techniques employed by system designers for such systems. Additionally, we discuss the future challenges and opportunities in DCC management.
APA, Harvard, Vancouver, ISO, and other styles
17

Xu, Shilin, and Caili Guo. "Computation Offloading in a Cognitive Vehicular Networks with Vehicular Cloud Computing and Remote Cloud Computing." Sensors 20, no. 23 (2020): 6820. http://dx.doi.org/10.3390/s20236820.

Full text
Abstract:
To satisfy the explosive growth of computation-intensive vehicular applications, we investigated the computation offloading problem in a cognitive vehicular network (CVN). Specifically, in our scheme, vehicular cloud computing (VCC)- and remote cloud computing (RCC)-enabled computation offloading were jointly considered. So far, extensive research has been conducted on RCC-based computation offloading, while studies on VCC-based computation offloading are relatively rare. In fact, due to the dynamics and uncertainty of on-board resources, VCC-based computation offloading is more challenging than the RCC one, especially in vehicular scenarios with expensive inter-vehicle communication or poor communication environments. To solve this problem, we propose to leverage the VCC’s computation resources for computation offloading in a perception-exploitation manner, which mainly comprises two stages: resource discovery and computation offloading. In the resource discovery stage, based on the action-observation history, a Long Short-Term Memory (LSTM) model is proposed to predict the on-board resource utilization status at the next time slot. Thereafter, based on the obtained computation resource distribution, a decentralized multi-agent Deep Reinforcement Learning (DRL) algorithm is proposed to solve the collaborative computation offloading with VCC and RCC. Last but not least, the proposed algorithms’ effectiveness is verified with a host of numerical simulation results from different perspectives.
APA, Harvard, Vancouver, ISO, and other styles
18

Alava, Pallavi, and G. Radhika. "Robust and Secure Framework for Mobile Cloud Computing." Asian Journal of Computer Science and Technology 8, S3 (2019): 1–6. http://dx.doi.org/10.51983/ajcst-2019.8.s3.2115.

Full text
Abstract:
Smartphone devices are widely utilized in our daily lives. However, these devices exhibit limitations, like short battery lifetime, limited computation power, small memory size and unpredictable network connectivity. Therefore, various solutions have been proposed to mitigate these limitations and extend the battery lifetime with the use of the offloading technique. In this paper, a new framework is proposed to offload intensive computation tasks from the mobile device to the cloud. This framework uses an optimization model to determine the offloading decision dynamically based on four main parameters, namely, energy consumption, CPU utilization, execution time, and memory usage. Additionally, a new security layer is provided to protect the transferred data in the cloud from any attack. The experimental results showed that the framework can choose a suitable offloading decision for various types of mobile application tasks while achieving significant performance improvement. Moreover, different from previous techniques, the framework can protect application data from any threat.
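
The decision logic described here (energy, CPU utilization, execution time, memory) can be illustrated with a simple weighted-score comparison between local and cloud execution. This is a generic sketch under assumed weights and measurements, not the paper's optimization model.

```python
# Generic local-vs-cloud offloading decision from weighted normalized costs.
# Weights and the sample measurements are assumptions, not values from the paper.

WEIGHTS = {"energy": 0.35, "cpu": 0.2, "time": 0.35, "memory": 0.1}

def weighted_cost(metrics, reference):
    """Normalize each metric by a reference scale and combine with fixed weights."""
    return sum(WEIGHTS[k] * metrics[k] / reference[k] for k in WEIGHTS)

def should_offload(local, cloud, reference):
    """Offload when the cloud execution scores a lower weighted cost."""
    return weighted_cost(cloud, reference) < weighted_cost(local, reference)

if __name__ == "__main__":
    reference = {"energy": 5.0, "cpu": 100.0, "time": 10.0, "memory": 512.0}  # scale units
    local = {"energy": 3.2, "cpu": 85.0, "time": 6.0, "memory": 300.0}   # measured on device
    cloud = {"energy": 1.1, "cpu": 10.0, "time": 4.5, "memory": 40.0}    # incl. transfer cost
    print("offload" if should_offload(local, cloud, reference) else "run locally")
```
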
APA, Harvard, Vancouver, ISO, and other styles
19

Tu, Youpeng, Haiming Chen, Linjie Yan, and Xinyan Zhou. "Task Offloading Based on LSTM Prediction and Deep Reinforcement Learning for Efficient Edge Computing in IoT." Future Internet 14, no. 2 (2022): 30. http://dx.doi.org/10.3390/fi14020030.

Full text
Abstract:
In IoT (Internet of Things) edge computing, task offloading can lead to additional transmission delays and transmission energy consumption. To reduce the cost of resources required for task offloading and improve the utilization of server resources, in this paper, we model the task offloading problem as a joint decision making problem for cost minimization, which integrates the processing latency, processing energy consumption, and the task throw rate of latency-sensitive tasks. The Online Predictive Offloading (OPO) algorithm based on Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) networks is proposed to solve the above task offloading decision problem. In the training phase of the model, this algorithm predicts the load of the edge server in real-time with the LSTM algorithm, which effectively improves the convergence accuracy and convergence speed of the DRL algorithm in the offloading process. In the testing phase, the LSTM network is used to predict the characteristics of the next task, and then the computational resources are allocated for the task in advance by the DRL decision model, thus further reducing the response delay of the task and enhancing the offloading performance of the system. The experimental evaluation shows that this algorithm can effectively reduce the average latency by 6.25%, the offloading cost by 25.6%, and the task throw rate by 31.7%.
APA, Harvard, Vancouver, ISO, and other styles
20

Dushku, Edlira, Jeppe Hagelskjær Østergaard, and Nicola Dragoni. "Memory Offloading for Remote Attestation of Multi-Service IoT Devices." Sensors 22, no. 12 (2022): 4340. http://dx.doi.org/10.3390/s22124340.

Full text
Abstract:
Remote attestation (RA) is an effective malware detection mechanism that allows a trusted entity (Verifier) to detect a potentially compromised remote device (Prover). The recent research works are proposing advanced Control-Flow Attestation (CFA) protocols that are able to trace the Prover’s execution flow to detect runtime attacks. Nevertheless, several memory regions remain unattested, leaving the Prover vulnerable to data memory and mobile adversaries. Multi-service devices, whose integrity is also dependent on the integrity of any attached external peripheral devices, are particularly vulnerable to such attacks. This paper extends the state-of-the-art RA schemes by presenting ERAMO, a protocol that attests larger memory regions by adopting the memory offloading approach. We validate and evaluate ERAMO with a hardware proof-of-concept implementation using a TrustZone-capable LPC55S69 running two sensor nodes. We enhance the protocol by providing extensive memory analysis insights for multi-service devices, demonstrating that it is possible to analyze and attest the memory of the attached peripherals. Experiments confirm the feasibility and effectiveness of ERAMO in attesting dynamic memory regions.
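
At its core, attesting an offloaded memory region reduces to comparing a fresh, nonce-bound measurement against an expected value. The sketch below shows only that generic hash-and-compare step (an HMAC over a memory snapshot); it omits TrustZone, the peripheral handling, and everything else specific to ERAMO, and the shared key and memory contents are placeholders.

```python
# Generic nonce-bound memory attestation check (not the ERAMO protocol itself).
import hmac
import hashlib
import os

SHARED_KEY = b"prover-verifier-shared-key"   # placeholder key for illustration

def prover_report(memory_snapshot: bytes, nonce: bytes) -> bytes:
    """Prover measures the (possibly offloaded) memory region, bound to the nonce."""
    return hmac.new(SHARED_KEY, nonce + memory_snapshot, hashlib.sha256).digest()

def verifier_check(report: bytes, expected_memory: bytes, nonce: bytes) -> bool:
    """Verifier recomputes the measurement over the known-good image and compares."""
    expected = hmac.new(SHARED_KEY, nonce + expected_memory, hashlib.sha256).digest()
    return hmac.compare_digest(report, expected)

if __name__ == "__main__":
    good_image = b"\x00" * 1024                 # known-good firmware/data region
    nonce = os.urandom(16)                      # freshness challenge from the verifier
    print(verifier_check(prover_report(good_image, nonce), good_image, nonce))   # True
    tampered = b"\x00" * 1023 + b"\x01"
    print(verifier_check(prover_report(tampered, nonce), good_image, nonce))     # False
```
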
APA, Harvard, Vancouver, ISO, and other styles
22

Richmond, Lauren L., Lois K. Burnett, Julia Kearley, Sam J. Gilbert, Alexandra B. Morrison, and B. Hunter Ball. "Individual differences in prospective and retrospective memory offloading." Journal of Memory and Language 142 (April 2025): 104617. https://doi.org/10.1016/j.jml.2025.104617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Wu, Jian, Min Jia, Liang Zhang, and Qing Guo. "DNNs Based Computation Offloading for LEO Satellite Edge Computing." Electronics 11, no. 24 (2022): 4108. http://dx.doi.org/10.3390/electronics11244108.

Full text
Abstract:
Huge low earth orbit (LEO) satellite networks can achieve global coverage with low latency. In addition, mobile edge computing (MEC) servers can be mounted on LEO satellites to provide computing offloading services for users in remote areas. A multi-user multi-task system model is modeled and the problem of user’s offloading decisions and bandwidth allocation is formulated as a mixed integer programming problem to minimize the system utility function expressed as the weighted sum of the system energy consumption and delay. However, it cannot be effectively solved by general optimizations. Thus, a deep learning-based offloading algorithm for LEO satellite edge computing networks is proposed to generate offloading decisions through multiple parallel deep neural networks (DNNs) and store the newly generated optimal offloading decisions in memory to improve all DNNs to obtain near-optimal offloading decisions. Moreover, the optimal bandwidth allocation scheme of the system is theoretically derived for the user’s bandwidth allocation problem. The simulation results show that the proposed algorithm can achieve a good convergence effect within a small number of training steps, and obtain the optimal system utility function values compared with the comparative algorithms under different system parameters, and the time cost of the system and DNNs is very satisfactory.
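
The loop described here, generate several candidate offloading decisions, keep the one with the lowest utility, and store it in memory for further training, can be sketched as below. The parallel DNNs are stubbed out by an order-preserving quantization of a relaxed decision vector, and the utility function is a toy weighted energy/delay sum; this illustrates the pattern, not the paper's algorithm.

```python
# Sketch of the decide-evaluate-remember loop for binary offloading decisions.
# The parallel DNNs are stubbed by quantizing a relaxed decision vector; the
# utility function and task parameters are toy values, not the paper's model.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
N_USERS = 6

def utility(decision, task_size, rate):
    """Toy weighted sum of delay and energy for a binary offloading vector."""
    local_delay = np.where(decision == 0, task_size / 2.0, 0.0)
    off_delay = np.where(decision == 1, task_size / rate, 0.0)
    energy = np.where(decision == 1, 0.3 * task_size / rate, 0.8 * task_size)
    return np.sum(0.5 * (local_delay + off_delay) + 0.5 * energy)

def candidate_decisions(relaxed, k):
    """Order-preserving quantization: threshold the relaxed output k different ways."""
    order = np.argsort(-relaxed)
    cands = [(relaxed > 0.5).astype(int)]
    for m in range(1, k):
        d = np.zeros_like(relaxed, dtype=int)
        d[order[:m]] = 1            # offload the m users with the largest scores
        cands.append(d)
    return cands

replay_memory = deque(maxlen=128)   # stores (state, best decision) pairs for retraining
for step in range(5):
    task_size = rng.uniform(1.0, 5.0, N_USERS)     # state observed this slot
    rate = rng.uniform(0.5, 3.0, N_USERS)
    relaxed = rng.random(N_USERS)                  # stand-in for the DNNs' relaxed output
    cands = candidate_decisions(relaxed, k=N_USERS)
    best = min(cands, key=lambda d: utility(d, task_size, rate))
    replay_memory.append((np.concatenate([task_size, rate]), best))
    print(f"slot {step}: best decision {best.tolist()}")
```
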
APA, Harvard, Vancouver, ISO, and other styles
24

Ahuja, Sanjay P., and Inan Kaddour. "Mobile Cloud Computing." International Journal of Cloud Applications and Computing 15, no. 1 (2025): 1–35. https://doi.org/10.4018/ijcac.378695.

Full text
Abstract:
Currently, smart mobile devices are used for more than just calling and texting. They can run complex applications such as GPS, antivirus, and photo editor applications. Smart devices today offer mobility, flexibility, and portability, but they have limited resources and a relatively weak battery. As companies began creating mobile resource-hungry and power-hungry applications, they have realized that cloud computing was one of the solutions that they could utilize to overcome smart device constraints. Cloud computing helps decrease memory usage and improve battery life. Mobile cloud computing is the current and expanding research area focusing on methods that allow smart mobile devices to take full advantage of cloud computing. Code offloading is one of the techniques that is employed in cloud computing with mobile devices. This research compares two dynamic offloading frameworks to determine which one is better in terms of execution time and battery life improvement. While executing light tasks Cuckoo does better with local execution while Aiolos outperforms Cuckoo when offloading a light computation task to the cloud. Similarly, Aiolos performs better than Cuckoo when offloading a heavy computation task to an EC2 instance. Regarding battery consumption, offloading using either framework saves 23% more power than the local environment. Aiolos consumes less battery power than Cuckoo when offloading a heavy computation task.
APA, Harvard, Vancouver, ISO, and other styles
25

Cheng, Xiaoliang, Jingchun Liu, and Zhigang Jin. "Efficient Deep Learning Approach for Computational Offloading in Mobile Edge Computing Networks." Wireless Communications and Mobile Computing 2022 (February 22, 2022): 1–12. http://dx.doi.org/10.1155/2022/2976141.

Full text
Abstract:
The fifth-generation mobile communication technology is broadly characterised by extremely high data rate, low latency, massive network capacity, and ultrahigh reliability. However, owing to the explosive increase in mobile devices and data, it faces challenges, such as data traffic, high energy consumption, and communication delays. In this study, multiaccess edge computing (previously known as mobile edge computing) is investigated to reduce energy consumption and delay. The mathematical model of multidimensional variable programming is established by combining the offloading scheme and bandwidth allocation to ensure that the computing task of wireless devices (WDs) can be reasonably offloaded to an edge server. However, traditional analysis tools are limited by computational dimensions, which make it difficult to solve the problem efficiently, especially for large-scale WDs. In this study, a novel offloading algorithm known as energy-efficient deep learning-based offloading is proposed. The proposed algorithm uses a new type of deep learning model: multiple-parallel deep neural network. The generated offloading schemes are stored in shared memory, and the optimal scheme is generated by continuous training. Experiments show that the proposed algorithm can generate near-optimal offloading schemes efficiently and accurately.
APA, Harvard, Vancouver, ISO, and other styles
26

Jang, Jihye, Khikmatullo Tulkinbekov, and Deok-Hwan Kim. "Task Offloading of Deep Learning Services for Autonomous Driving in Mobile Edge Computing." Electronics 12, no. 15 (2023): 3223. http://dx.doi.org/10.3390/electronics12153223.

Full text
Abstract:
As the utilization of complex and heavy applications increases in autonomous driving, research on using mobile edge computing and task offloading for autonomous driving is being actively conducted. Recently, researchers have been studying task offloading algorithms using artificial intelligence, such as reinforcement learning or partial offloading. However, these methods require a lot of training data and critical deadlines and are weakly adaptive to complex and dynamically changing environments. To overcome this weakness, in this paper, we propose a novel task offloading algorithm based on Lyapunov optimization to maintain the system stability and minimize task processing delay. First, a real-time monitoring system is built to utilize distributed computing resources in an autonomous driving environment efficiently. Second, the computational complexity and memory access rate are analyzed to reflect the characteristics of the deep learning applications to the task offloading algorithm. Third, Lyapunov and Lagrange optimization solves the trade-off issues between system stability and user requirements. The experimental results show that the system queue backlog remains stable, and the tasks are completed within an average of 0.4231 s, 0.7095 s, and 0.9017 s for object detection, driver profiling, and image recognition, respectively. Therefore, we ensure that the proposed task offloading algorithm enables the deep learning application to be processed within the deadline and keeps the system stable.
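
The Lyapunov-based tradeoff between queue stability and task-processing cost can be pictured with the standard drift-plus-penalty rule: at each slot, pick the action minimizing V*cost + Q*(arrivals - service). The sketch below applies that rule to a single toy task queue; it is not the paper's algorithm, and the arrival rates, service rates, costs, and V are arbitrary.

```python
# Drift-plus-penalty sketch for offloading under a task queue (illustrative only).
import random

random.seed(1)
V = 5.0                      # weight on cost vs. queue stability
Q = 0.0                      # task queue backlog (in task units)

# action -> (service rate in tasks/slot, cost per slot: delay/energy proxy)
ACTIONS = {"local": (1.0, 0.2), "offload": (3.0, 1.0), "idle": (0.0, 0.0)}

def choose_action(backlog):
    """Minimize V*cost - backlog*service, the per-slot drift-plus-penalty bound."""
    return min(ACTIONS, key=lambda a: V * ACTIONS[a][1] - backlog * ACTIONS[a][0])

for slot in range(20):
    arrivals = random.uniform(0.5, 2.5)          # new tasks this slot
    action = choose_action(Q)
    service, cost = ACTIONS[action]
    Q = max(Q + arrivals - service, 0.0)
    if slot % 5 == 0:
        print(f"slot {slot:2d}: action={action:7s} backlog={Q:5.2f}")
```
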
APA, Harvard, Vancouver, ISO, and other styles
27

R., Aishwarya, and Mathivanan G. "Improved salp swarm algorithm based optimization of mobile task offloading." PeerJ Computer Science 11 (May 7, 2025): e2818. https://doi.org/10.7717/peerj-cs.2818.

Full text
Abstract:
Background: The realization of computation-intensive applications such as real-time video processing, virtual/augmented reality, and face recognition becomes possible for mobile devices with the latest advances in communication technologies. These applications require complex computation for a better user experience and real-time decision-making. However, Internet of Things (IoT) and mobile devices have limited computational power and energy. Executing these computation-intensive tasks on edge devices may result in high energy consumption or high computation latency. In recent times, mobile edge computing (MEC) has been used and modernized for offloading such complex tasks. In MEC, IoT devices transmit their tasks to edge servers, which consecutively carry out faster computation. Methods: However, IoT devices and edge servers put an upper limit on executing concurrent tasks. Furthermore, executing a smaller task (1 KB) on an edge server leads to improved energy consumption. Thus, there is a need for an optimum range for task offloading so that the energy consumption and response time are minimal. Evolutionary algorithms are well suited to resolving such multiobjective problems; the objectives here are the reduction of energy, memory usage, and delay, together with the selection of tasks to offload. Therefore, this study presents an improved salp swarm algorithm-based Mobile Application Offloading Algorithm (ISSA-MAOA) technique for MEC. Results: This technique harnesses the optimization capabilities of the improved salp swarm algorithm (ISSA) to intelligently allocate computing tasks between mobile devices and the cloud, aiming to concurrently minimize energy consumption and memory usage and reduce task completion delays. Through the proposed ISSA-MAOA, the study endeavors to contribute to the enhancement of mobile cloud computing (MCC) frameworks, providing a more efficient and sustainable solution for offloading tasks in mobile applications. The results of this research contribute to better resource management, improved user interactions, and enhanced efficiency in MCC environments.
APA, Harvard, Vancouver, ISO, and other styles
28

Zha, Zhiyong, Yifei Yang, Yongjun Xia, et al. "Energy-Efficient Joint Partitioning and Offloading for Delay-Sensitive CNN Inference in Edge Computing." Applied Sciences 14, no. 19 (2024): 8656. http://dx.doi.org/10.3390/app14198656.

Full text
Abstract:
With the development of deep learning foundation model technology, the types of computing tasks have become more complex, and the computing resources and memory required for these tasks have also become more substantial. Since it has long been revealed that task offloading to cloud servers has many drawbacks, such as high communication delay and low security, task offloading is mostly carried out in the edge servers of the Internet of Things (IoT) network. However, edge servers in IoT networks are characterized by tight resource constraints and often the dynamic nature of data sources. Therefore, the question of how to perform task offloading of deep learning foundation model services on edge servers has become a new research topic. However, the existing task offloading methods either cannot meet the requirements of massive CNN architectures or require a lot of communication overhead, leading to significant delays and energy consumption. In this paper, we propose a parallel partitioning method based on matrix convolution to partition foundation model inference tasks, which partitions large CNN inference tasks into subtasks that can be executed in parallel to meet the constraints of edge devices with limited hardware resources. Then, we model and mathematically express the problem of task offloading. In a multi-edge-server, multi-user, and multi-task edge-end system, we propose a task-offloading method that balances the tradeoff between delay and energy consumption. It adopts a greedy algorithm to optimize task-offloading decisions and terminal device transmission power to maximize the benefits of task offloading. Finally, extensive experiments verify the significant and extensive effectiveness of our algorithm.
APA, Harvard, Vancouver, ISO, and other styles
29

T., Rajendran, Karthikeyan B., Suresh N., Sai Sravanthi G., Vijay K., and Lakshmi Tejaswini P. "Accomplishing memory leakage flexibility framework utilizing physically unclonable capabilities and Fuzzy Extractor." International Journal of Computational Intelligence in Control 12, no. 2 (2020): 268–74. https://doi.org/10.5281/zenodo.7485693.

Full text
Abstract:
Offloading private data to shared cloud storage and letting end users access it is an attractive choice for a wide range of users, giving people more flexibility and lower prices. For security reasons, private data should be encrypted before offloading, making traditional keyword search tactics impractical. As such, searchable encryption has been extensively researched in recent years. For practicality, multi-keyword ranked searches on encrypted data are important. However, almost all existing multi-keyword ranked search schemes suffer from the security threat of non-volatile memory leak attacks. To solve this kind of problem, a secure multi-keyword ranked search scheme is proposed that resists memory leak attacks. The proposed scheme utilizes physically unclonable functions (PUFs) to transform keywords and document identifiers. Due to the noisy nature of PUFs, we use a Fuzzy Extractor (FE) to recover the key of the key
APA, Harvard, Vancouver, ISO, and other styles
30

Ball, Hunter, Phil Peper, Durna Alakbarova, Gene Brewer, and Sam J. Gilbert. "Individual differences in working memory capacity predict benefits to memory from intention offloading." Memory 30, no. 2 (2021): 77–91. http://dx.doi.org/10.1080/09658211.2021.1991380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Sheng, Jinfang, Jie Hu, Xiaoyu Teng, Bin Wang, and Xiaoxia Pan. "Computation Offloading Strategy in Mobile Edge Computing." Information 10, no. 6 (2019): 191. http://dx.doi.org/10.3390/info10060191.

Full text
Abstract:
Mobile phone applications have been rapidly growing and emerging with the Internet of Things (IoT) applications in augmented reality, virtual reality, and ultra-clear video due to the development of mobile Internet services in the last three decades. These applications demand intensive computing to support data analysis, real-time video processing, and decision-making for optimizing the user experience. Mobile smart devices play a significant role in our daily life, and such an upward trend is continuous. Nevertheless, these devices suffer from limited resources such as CPU, memory, and energy. Computation offloading is a promising technique that can promote the lifetime and performance of smart devices by offloading local computation tasks to edge servers. In light of this situation, the strategy of computation offloading has been adopted to solve this problem. In this paper, we propose a computation offloading strategy under a scenario of multi-user and multi-mobile edge servers that considers the performance of intelligent devices and server resources. The strategy contains three main stages. In the offloading decision-making stage, the basis of offloading decision-making is put forward by considering the factors of computing task size, computing requirement, computing capacity of server, and network bandwidth. In the server selection stage, the candidate servers are evaluated comprehensively by multi-objective decision-making, and the appropriate servers are selected for the computation offloading. In the task scheduling stage, a task scheduling model based on the improved auction algorithm has been proposed by considering the time requirement of the computing tasks and the computing performance of the mobile edge computing server. Extensive simulations have demonstrated that the proposed computation offloading strategy could effectively reduce service delay and the energy consumption of intelligent devices, and improve user experience.
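
The offloading decision-making basis in the first stage (task size, computing requirement, server capacity, network bandwidth) boils down to comparing local execution time with transmission-plus-remote execution time. The helper below sketches that comparison with invented parameter values; the server-selection and auction-based scheduling stages of the paper are not modeled.

```python
# Decision-basis sketch: offload when transfer + remote compute beats local compute.
# Parameter values are invented for illustration; server selection and the
# auction-based scheduling stages of the paper are not modeled here.

def local_time(cycles, device_freq_hz):
    return cycles / device_freq_hz

def offload_time(task_bits, bandwidth_bps, cycles, server_freq_hz):
    return task_bits / bandwidth_bps + cycles / server_freq_hz

def decide(task_bits, cycles, device_freq_hz, bandwidth_bps, server_freq_hz):
    t_local = local_time(cycles, device_freq_hz)
    t_edge = offload_time(task_bits, bandwidth_bps, cycles, server_freq_hz)
    return ("offload" if t_edge < t_local else "local"), t_local, t_edge

if __name__ == "__main__":
    # 2 MB task needing 4 giga-cycles, 1 GHz device, 20 Mbps link, 10 GHz edge server
    decision, t_local, t_edge = decide(task_bits=2 * 8e6, cycles=4e9,
                                       device_freq_hz=1e9, bandwidth_bps=20e6,
                                       server_freq_hz=10e9)
    print(decision, f"local={t_local:.2f}s edge={t_edge:.2f}s")
```
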
APA, Harvard, Vancouver, ISO, and other styles
32

Meena, V., Obulaporam Gireesha, Kannan Krithivasan, and V. S. Shankar Sriram. "Fuzzy simplified swarm optimization for multisite computational offloading in mobile cloud computing." Journal of Intelligent & Fuzzy Systems 39, no. 6 (2020): 8285–97. http://dx.doi.org/10.3233/jifs-189148.

Full text
Abstract:
Mobile Cloud Computing (MCC)’s rapid technological advancements facilitate various computational-intensive applications on smart mobile devices. However, such applications are constrained by the limited processing power, energy consumption, and storage capacity of smart mobile devices. To mitigate these issues, computational offloading is found to be one of the promising techniques, as it offloads the execution of computation-intensive applications to cloud resources. In addition, various kinds of cloud services and resourceful servers are available to offload computationally intensive tasks. However, their processing speeds, access delays, computation capabilities, residual memory and service charges differ, which retards their usage, as decision-making becomes time-consuming and ambiguous. To address the aforementioned issues, this paper presents a Fuzzy Simplified Swarm Optimization based cloud Computational Offloading (FSSOCO) algorithm to achieve optimum multisite offloading. Fuzzy logic and simplified swarm optimization are employed for the identification of highly powerful nodes and task decomposition, respectively. The overall performance of FSSOCO is validated using the Specjvm benchmark suite and compared with the state-of-the-art offloading techniques in terms of the weighted total cost, energy consumption, and processing time.
APA, Harvard, Vancouver, ISO, and other styles
33

Bai, Wenle, and Ying Wang. "Jointly Optimize Partial Computation Offloading and Resource Allocation in Cloud-Fog Cooperative Networks." Electronics 12, no. 15 (2023): 3224. http://dx.doi.org/10.3390/electronics12153224.

Full text
Abstract:
Fog computing has become a hot topic in recent years as it provides cloud computing resources to the network edge in a distributed manner that can respond quickly to intensive tasks from different user equipment (UE) applications. However, since fog resources are also limited, considering the number of Internet of Things (IoT) applications and the demand for traffic, designing an effective offload strategy and resource allocation scheme to reduce the offloading cost of UE systems is still an important challenge. To this end, this paper investigates the problem of partial offloading and resource allocation under a cloud-fog coordination network architecture, which is formulated as a mixed integer nonlinear programming (MINLP) problem. A new weighting metric, the cloud resource rental cost, is brought in, and the optimization function of offloading cost is defined as a weighted sum of latency, energy consumption, and cloud rental cost. Under a fixed offloading decision, two sub-problems of fog computing resource allocation and user transmission power allocation are proposed and solved using convex optimization techniques and Karush-Kuhn-Tucker (KKT) conditions, respectively. The sampling process of the inner loop of the simulated annealing (SA) algorithm is improved, and a memory function is added to obtain the novel simulated annealing (N-SA) algorithm, which is used to solve the optimal offloading problem corresponding to the optimal resource allocation problem. Through extensive simulation experiments, it is shown that the N-SA algorithm obtains the optimal solution quickly and saves 17% of the system cost compared to the greedy offloading and joint resource allocation (GO-JRA) algorithm.
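
The "memory function" added to simulated annealing in an algorithm like N-SA is essentially remembering the best solution visited so far, so that an accepted worsening move never loses it. The sketch below shows plain simulated annealing with such a memory on a toy binary offloading cost; the cost function and cooling schedule are placeholders, not the paper's.

```python
# Simulated annealing with a best-solution memory on a toy binary offloading vector.
# The cost function and cooling schedule are placeholders, not the paper's N-SA.
import math
import random

random.seed(7)
N = 12                                   # number of tasks (one offload decision per task)

def cost(x):
    """Toy cost: offloading neighbouring tasks together is cheap, isolated offloads are not."""
    transfer = sum(x)
    mismatch = sum(1 for i in range(N - 1) if x[i] != x[i + 1])
    return 0.5 * transfer + 1.0 * mismatch

def anneal(iters=2000, t0=5.0, cooling=0.995):
    x = [random.randint(0, 1) for _ in range(N)]
    best_x, best_c = list(x), cost(x)    # memory of the best solution seen so far
    t, c = t0, best_c
    for _ in range(iters):
        y = list(x)
        y[random.randrange(N)] ^= 1      # flip one offloading decision
        cy = cost(y)
        if cy <= c or random.random() < math.exp((c - cy) / t):
            x, c = y, cy                 # accept (possibly worsening) move
            if c < best_c:
                best_x, best_c = list(x), c
        t *= cooling                     # geometric cooling
    return best_x, best_c

solution, value = anneal()
print(value, solution)
```
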
APA, Harvard, Vancouver, ISO, and other styles
34

Jiang, Guiwen, Rongxi Huang, Zhiming Bao, and Gaocai Wang. "A Task Offloading and Resource Allocation Strategy Based on Multi-Agent Reinforcement Learning in Mobile Edge Computing." Future Internet 16, no. 9 (2024): 333. http://dx.doi.org/10.3390/fi16090333.

Full text
Abstract:
Task offloading and resource allocation is a research hotspot in cloud-edge collaborative computing. Much existing research adopted single-agent reinforcement learning to solve this problem, which has some defects such as low robustness, a large decision space, and ignoring delayed rewards. In view of the above deficiencies, this paper constructs a cloud-edge collaborative computing model, and related task queue, delay, and energy consumption models, and gives a joint optimization problem modeling for task offloading and resource allocation with multiple constraints. Then, in order to solve the joint optimization problem, this paper designs a decentralized offloading and scheduling scheme based on "task-oriented" multi-agent reinforcement learning. In this scheme, we present information synchronization protocols and offloading scheduling rules and use edge servers as agents to construct a multi-agent system based on the Actor–Critic framework. In order to handle delayed rewards, this paper models the offloading and scheduling problem as a "task-oriented" Markov decision process. This process abandons the commonly used equidistant time slot model and instead uses dynamic and parallel slots in the step of task processing time. Finally, an offloading decision algorithm, TOMAC-PPO, is proposed. The algorithm applies proximal policy optimization to the multi-agent system and combines the Transformer neural network model to realize the memory and prediction of network state information. Experimental results show that this algorithm has better convergence speed and can effectively reduce the service cost, energy consumption, and task drop rate under high load and high failure rates. For example, the proposed TOMAC-PPO can reduce the average cost by 19.4% to 66.6% compared to other offloading schemes under the same network load. In addition, the drop rate of some baseline algorithms with 50 users can reach 62.5% for critical tasks, while the proposed TOMAC-PPO only has 5.5%.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Chenlei, and Zhixin Sun. "A Multi-Agent Reinforcement Learning-Based Task-Offloading Strategy in a Blockchain-Enabled Edge Computing Network." Mathematics 12, no. 14 (2024): 2264. http://dx.doi.org/10.3390/math12142264.

Full text
Abstract:
In recent years, many mobile edge computing network solutions have enhanced data privacy and security and built a trusted network mechanism by introducing blockchain technology. However, this also complicates the task-offloading problem of blockchain-enabled mobile edge computing, and traditional evolutionary learning and single-agent reinforcement learning algorithms are difficult to solve effectively. In this paper, we propose a blockchain-enabled mobile edge computing task-offloading strategy based on multi-agent reinforcement learning. First, we innovatively propose a blockchain-enabled mobile edge computing task-offloading model by comprehensively considering optimization objectives such as task execution energy consumption, processing delay, user privacy metrics, and blockchain incentive rewards. Then, we propose a deep reinforcement learning algorithm based on multiple agents sharing a global memory pool using the actor–critic architecture, which enables each agent to acquire the experience of another agent during the training process to enhance the collaborative capability among agents and overall performance. In addition, we adopt attenuatable Gaussian noise into the action space selection process in the actor network to avoid falling into the local optimum. Finally, experiments show that this scheme’s comprehensive cost calculation performance is enhanced by more than 10% compared with other multi-agent reinforcement learning algorithms. In addition, Gaussian random noise-based action space selection and a global memory pool improve the performance by 38.36% and 43.59%, respectively.
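
The "multiple agents sharing a global memory pool" idea can be pictured as a single replay buffer that every agent pushes its transitions into and samples mixed experience from. The snippet below is just that data-structure sketch (no actor-critic networks, no blockchain), with made-up transition contents.

```python
# Shared global replay pool: every agent writes to and samples from one buffer.
# Only the data-structure idea is sketched; actor-critic training is omitted.
import random
from collections import deque

class SharedMemoryPool:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, agent_id, state, action, reward, next_state):
        self.buffer.append((agent_id, state, action, reward, next_state))

    def sample(self, batch_size):
        """Each agent trains on a mix of its own and other agents' experience."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

if __name__ == "__main__":
    random.seed(0)
    pool = SharedMemoryPool()
    for step in range(100):                       # three agents collecting experience
        for agent_id in range(3):
            state = [random.random() for _ in range(4)]
            action = random.randint(0, 1)         # e.g., offload or not
            reward = -random.random()             # negative cost
            next_state = [random.random() for _ in range(4)]
            pool.push(agent_id, state, action, reward, next_state)
    batch = pool.sample(8)
    print("agents represented in one batch:", sorted({t[0] for t in batch}))
```
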
APA, Harvard, Vancouver, ISO, and other styles
36

Lin, Jiaxin, Tao Ji, Xiangpeng Hao, et al. "Towards Accelerating Data Intensive Application's Shuffle Process Using SmartNICs." ACM SIGMETRICS Performance Evaluation Review 51, no. 1 (2023): 9–10. http://dx.doi.org/10.1145/3606376.3593577.

Full text
Abstract:
Emerging SmartNIC creates new opportunities to offload application-level computation into the networking layer. Shuffle, the all-to-all data exchange process, is a critical building block for network communication in distributed data-intensive applications and can potentially benefit from SmartNICs. In this paper, we develop SmartShuffle, which accelerates the data-intensive application's shuffle process by offloading computation tasks into the SmartNIC devices. SmartShuffle supports offloading both low-level network functions, including data partitioning and network transport, and high-level computation tasks, including filtering, aggregation, and sorting. SmartShuffle adopts a coordinated offload architecture to make sender-side and receiver-side SmartNICs jointly contribute to the computation offload. SmartShuffle manages the computation and memory constraints on the device using liquid offloading, which dynamically migrates computations between the host CPU and the SmartNIC at runtime. We prototype SmartShuffle on the Stingray SoC SmartNICs and plug it into Spark. Our evaluation shows that SmartShuffle outperforms Spark, and Spark RDMA by up to 40% on TPC-H.
APA, Harvard, Vancouver, ISO, and other styles
37

Xiao, Mengbai, Hao Wang, Liang Geng, Rubao Lee, and Xiaodong Zhang. "An RDMA-enabled In-memory Computing Platform for R-tree on Clusters." ACM Transactions on Spatial Algorithms and Systems 8, no. 2 (2022): 1–26. http://dx.doi.org/10.1145/3503513.

Full text
Abstract:
R-tree is a foundational data structure used in spatial databases and scientific databases. With the advancement of networks and computer architectures, in-memory data processing for R-tree in distributed systems has become a common platform. We have observed new performance challenges to process R-tree as the amount of multidimensional datasets become increasingly high. Specifically, an R-tree server can be heavily overloaded while the network and client CPU are lightly loaded, and vice versa. In this article, we present the design and implementation of Catfish, an RDMA-enabled R-tree for low latency and high throughput by adaptively utilizing the available network bandwidth and computing resources to balance the workloads between clients and servers. We design and implement two basic mechanisms of using RDMA for a client-server R-tree data processing system. First, in the fast messaging design, we use RDMA writes to send R-tree requests to the server and let server threads process R-tree requests to achieve low query latency. Second, in the RDMA offloading design, we use RDMA reads to offload tree traversal from the server to the client, which rescues the server as it is overloaded. We further develop an adaptive scheme to effectively switch an R-tree search between fast messaging and RDMA offloading, maximizing the overall performance. Our experiments show that the adaptive solution of Catfish on InfiniBand significantly outperforms R-tree that uses only fast messaging or only RDMA offloading in both latency and throughput. Catfish can also deliver up to one order of magnitude performance over the traditional schemes using TCP/IP on 1 and 40 Gbps Ethernet. We make a strong case to use RDMA to effectively balance workloads in distributed systems for low latency and high throughput.
APA, Harvard, Vancouver, ISO, and other styles
38

Shirayanagi, Hirotoshi, Ami Nozoe, and Shinya Kurauchi. "Suppression and facilitation of streetscape memory caused by cognitive offloading." Journal of the City Planning Institute of Japan 59, no. 3 (2024): 1044–51. http://dx.doi.org/10.11361/journalcpij.59.1044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Siarkowski, Krzysztof, Przemysław Sprawka, and Małgorzata Plechawska-Wójcik. "Methods for optimizing the performance of Unity 3D game engine based on third-person perspective game." Journal of Computer Sciences Institute 3 (March 30, 2017): 46–53. http://dx.doi.org/10.35784/jcsi.592.

Full text
Abstract:
Game optimization is one of the most important aspects of game creation. The article describes methods to optimize the Unity engine, using a third-person perspective game as an example. Various methods that offload work from the graphics card by increasing the use of the CPU and memory were used in order to check how the game performance changes.
APA, Harvard, Vancouver, ISO, and other styles
40

Yang, Yichi, Ruibin Yan, and Yijun Gu. "LTransformer: A Transformer-Based Framework for Task Offloading in Vehicular Edge Computing." Applied Sciences 13, no. 18 (2023): 10232. http://dx.doi.org/10.3390/app131810232.

Full text
Abstract:
Vehicular edge computing (VEC) is essential in vehicle applications such as traffic control and in-vehicle services. In the task offloading process of VEC, predictive-mode transmission based on deep learning is constrained by limited computational resources. Furthermore, the accuracy of deep learning algorithms in VEC is compromised due to the lack of edge computing features in algorithms. To solve these problems, this paper proposes a task offloading optimization approach that enables edge servers to store deep learning models. Moreover, this paper proposes the LTransformer, a transformer-based framework that incorporates edge computing features. The framework consists of pre-training, an input module, an encoding–decoding module, and an output module. Compared with four sequential deep learning methods, namely a Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), a Gated Recurrent Unit (GRU), and the Transformer, the LTransformer achieves the highest accuracy, reaching 80.1% on the real dataset. In addition, the LTransformer achieves 0.008 s when predicting a single trajectory, fully satisfying the fundamental requirements of real-time prediction and enabling task offloading optimization.
APA, Harvard, Vancouver, ISO, and other styles
41

Dai, Yu, Jiaming Fu, Zhen Gao, and Lei Yang. "Research on Joint Optimization of Task Offloading and UAV Trajectory in Mobile Edge Computing Considering Communication Cost Based on Safe Reinforcement Learning." Applied Sciences 14, no. 6 (2024): 2635. http://dx.doi.org/10.3390/app14062635.

Full text
Abstract:
Due to CPU and memory limitations, mobile IoT devices face challenges in handling delay-sensitive and computationally intensive tasks. Mobile edge computing addresses this issue by offloading tasks to the wireless network edge, reducing latency and energy consumption. UAVs serve as auxiliary edge clouds, providing flexible deployment and reliable wireless communication. To minimize latency and energy consumption, considering the limited resources and computing capabilities of UAVs, a multi-UAV and multi-edge cloud system was deployed for task offloading and UAV trajectory optimization. A joint optimization model for computing task offloading and UAV trajectory was proposed. During model training, a UAV communication mechanism was introduced to address potential coverage issues for mobile user devices through multiple UAVs or complete coverage. Considering the fact that decisions made by UAVs during trajectory planning may lead to collisions, a MADDPG algorithm with an integrated safety layer was adopted to obtain the safest actions closest to the joint UAV actions under safety constraints, thereby avoiding collisions between UAVs. Numerical simulation results demonstrate that the optimization method based on safety reinforcement learning considering communication cost outperforms other optimization methods. Communication between UAVs effectively addresses the issue of redundant or incomplete coverage for mobile user devices, reducing computation latency and energy consumption for task offloading. Additionally, the introduction of safety reinforcement learning effectively avoids collisions between UAVs.
APA, Harvard, Vancouver, ISO, and other styles
42

Norton, Anderson, Catherine Ulrich, and Sarah Kerrigan. "Unit Transformation Graphs: Modeling Students’ Mathematics in Meeting the Cognitive Demands of Fractions Multiplication Tasks." Journal for Research in Mathematics Education 54, no. 4 (2023): 240–59. http://dx.doi.org/10.5951/jresematheduc-2021-0031.

Full text
Abstract:
This article introduces unit transformation graphs (UTGs) as a tool for diagramming the ways students use sequences of mental actions to solve mathematical tasks. We report findings from a study in which we identified patterns in the ways preservice elementary school teachers relied on working memory to coordinate mental actions when operating in fraction multiplication settings. UTGs account for the constraint of working memory in sequencing mental actions to solve mathematical tasks. They also explain the power of units coordinating structures in offloading demands on working memory. At the end of the article, we consider some of the research implications for these findings—specifically, ways that UTGs can lend explanatory and illustrative power to analyses of students’ mathematics.
APA, Harvard, Vancouver, ISO, and other styles
43

Chiosa, Monica, Fabio Maschi, Ingo Müller, Gustavo Alonso, and Norman May. "Hardware acceleration of compression and encryption in SAP HANA." Proceedings of the VLDB Endowment 15, no. 12 (2022): 3277–91. http://dx.doi.org/10.14778/3554821.3554822.

Full text
Abstract:
With the advent of cloud computing, where computational resources are expensive and data movement needs to be secured and minimized, database management systems need to reconsider their architecture to accommodate such requirements. In this paper, we present our analysis, design and evaluation of an FPGA-based hardware accelerator for offloading compression and encryption for SAP HANA, SAP's Software-as-a-Service (SaaS) in-memory database. Firstly, we identify expensive data-transformation operations in the I/O path. Then we present the design details of a system consisting of compression followed by different types of encryption to accommodate different security levels, and identify which combinations maximize performance. We also analyze the performance benefits of offloading decryption to the FPGA followed by decompression on the CPU. The experimental evaluation using SAP HANA traces shows that analytical engines can benefit from FPGA hardware offloading. The results identify a number of important trade-offs (e.g., the system can provide low-latency secured transactions for high-performance use cases, or offer lower storage cost by also compressing payloads for less critical use cases), and provide valuable information to researchers and practitioners exploring the nascent space of hardware accelerators for database engines.
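For orientation, a CPU-only sketch of the compress-then-encrypt write path (and its inverse read path) that such an accelerator would replace is shown below. It assumes the third-party cryptography package and zlib, and is a generic reference pipeline rather than SAP HANA's implementation.

```python
# Illustrative CPU-only sketch of the data path the paper offloads to an FPGA:
# pages are compressed and then encrypted before hitting storage, and decrypted /
# decompressed on the way back. Not SAP HANA code.
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def write_path(page: bytes):
    compressed = zlib.compress(page, level=6)            # CPU-heavy transform #1
    nonce = os.urandom(12)
    ciphertext = aead.encrypt(nonce, compressed, None)   # CPU-heavy transform #2
    return nonce, ciphertext

def read_path(nonce: bytes, ciphertext: bytes) -> bytes:
    compressed = aead.decrypt(nonce, ciphertext, None)   # could stay on the FPGA
    return zlib.decompress(compressed)                   # ...or run on the CPU

page = b"columnar page payload " * 1000
nonce, ct = write_path(page)
assert read_path(nonce, ct) == page
```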
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Hongxia, Shiyu Xi, Hongzhao Jiang, Qi Shen, Bodong Shang, and Jian Wang. "Resource Allocation and Offloading Strategy for UAV-Assisted LEO Satellite Edge Computing." Drones 7, no. 6 (2023): 383. http://dx.doi.org/10.3390/drones7060383.

Full text
Abstract:
In emergency situations, such as earthquakes, landslides and other natural disasters, the terrestrial communications infrastructure is severely disrupted and unable to provide services to terrestrial IoT devices. However, tasks in emergency scenarios often demand more computing power and energy than local devices can supply quickly enough, and therefore require computational offloading. In addition, offloading tasks to server-equipped edge base stations may not always be feasible due to the lack of infrastructure or the distances involved. Since Low Earth Orbit (LEO) satellites have abundant computing resources and Unmanned Aerial Vehicles (UAVs) offer flexible deployment, offloading tasks to LEO satellite edge servers via UAVs becomes a straightforward way to provide computing services to ground-based devices. Therefore, this paper investigates computational task offloading and resource allocation in a UAV-assisted multi-layer LEO satellite network, taking into account satellite computing resources and device task volumes. In order to minimise the weighted sum of energy consumption and delay in the system, the problem is formulated as a constrained optimisation problem, which is then transformed into a Markov Decision Process (MDP). We propose a UAV-assisted airspace integration network architecture and a Deep Deterministic Policy Gradient and Long Short-Term Memory (DDPG-LSTM)-based task offloading and resource allocation algorithm to solve the problem. Simulation results demonstrate that the solution outperforms the baseline approach and that our framework and algorithm have the potential to provide reliable communication services in emergency situations.
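A minimal sketch of the kind of per-task trade-off behind such a formulation is given below: a weighted sum of delay and energy for local execution versus offloading over a UAV relay to a satellite edge server. All function names, parameters, and constants are illustrative assumptions, not the paper's system model.

```python
# Illustrative sketch of the weighted delay/energy cost the system minimises,
# comparing local execution with offloading via a UAV relay to a LEO edge server.
def local_cost(cycles, f_local, k=1e-27, w_delay=0.5, w_energy=0.5):
    delay = cycles / f_local                  # seconds to execute locally
    energy = k * f_local ** 2 * cycles        # simple dynamic-power energy model
    return w_delay * delay + w_energy * energy

def offload_cost(data_bits, cycles, rate_up, p_tx, f_sat, w_delay=0.5, w_energy=0.5):
    t_tx = data_bits / rate_up                # device -> UAV -> satellite uplink time
    t_exec = cycles / f_sat                   # execution on the satellite edge server
    e_tx = p_tx * t_tx                        # device-side transmit energy
    return w_delay * (t_tx + t_exec) + w_energy * e_tx

task = dict(data_bits=2e6, cycles=5e8)
c_local = local_cost(task["cycles"], f_local=1e9)
c_off = offload_cost(task["data_bits"], task["cycles"], rate_up=20e6, p_tx=0.5, f_sat=10e9)
print("offload" if c_off < c_local else "local", round(c_local, 4), round(c_off, 4))
```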
APA, Harvard, Vancouver, ISO, and other styles
45

Lin, Jiaxin, Tao Ji, Xiangpeng Hao, et al. "Towards Accelerating Data Intensive Application's Shuffle Process Using SmartNICs." Proceedings of the ACM on Measurement and Analysis of Computing Systems 7, no. 2 (2023): 1–23. http://dx.doi.org/10.1145/3589980.

Full text
Abstract:
The wide adoption of the emerging SmartNIC technology creates new opportunities to offload application-level computation into the networking layer, which relieves the burden on host CPUs and leads to performance improvement. Shuffle, the all-to-all data exchange process, is a critical building block for network communication in distributed data-intensive applications and can potentially benefit from SmartNICs. In this paper, we develop SmartShuffle, which accelerates the data-intensive application's shuffle process by offloading various computation tasks onto the SmartNIC devices. SmartShuffle supports offloading both low-level network functions, including data partitioning and network transport, and high-level computation tasks, including filtering, aggregation, and sorting. SmartShuffle adopts a coordinated offload architecture to make sender-side and receiver-side SmartNICs jointly contribute to the benefits of shuffle computation offload. SmartShuffle carefully manages the tight and time-varying computation and memory constraints on the device. We propose a liquid offloading approach, which dynamically migrates operators between the host CPU and the SmartNIC at runtime such that resources in both devices are fully utilized. We prototype SmartShuffle on the Stingray SoC SmartNICs and plug it into Spark. Our evaluation shows that SmartShuffle improves host CPU efficiency and I/O efficiency with lower job completion time. SmartShuffle outperforms Spark and Spark RDMA by up to 40% on TPC-H.
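The plain-Python sketch below illustrates the sender-side shuffle work that such a design pushes toward the NIC, namely hash-partitioning records and pre-aggregating them per partition before transmission. It is a conceptual stand-in, not the SmartNIC datapath or the Spark integration.

```python
# Illustrative sketch of sender-side shuffle offload: partition records by key and
# combine (aggregate) them per partition so less data crosses the network and
# reaches the host CPU on the receiver.
from collections import defaultdict

def partition_and_combine(records, num_partitions):
    """records: iterable of (key, value); returns per-partition combined dicts."""
    partitions = [defaultdict(int) for _ in range(num_partitions)]
    for key, value in records:
        p = hash(key) % num_partitions        # sender-side partitioning
        partitions[p][key] += value           # sender-side aggregation (combiner)
    return [dict(p) for p in partitions]

records = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
for i, part in enumerate(partition_and_combine(records, num_partitions=2)):
    print(f"partition {i}: {part}")
```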
APA, Harvard, Vancouver, ISO, and other styles
46

Tunga, Harinandan, Samarjit Kar, and Debasis Giri. "Intrinsic Profit Maximization of the Offloading Tasks for Mobile Edge Computing with Fixed Memory Capacities and Low Latency Constraints Using Ant Colony Optimization." Mathematical Modelling of Engineering Problems 9, no. 3 (2022): 668–74. http://dx.doi.org/10.18280/mmep.090313.

Full text
Abstract:
Artificial intelligence and the Internet of Things (IoT) have resulted in more computationally demanding and time-sensitive applications. Given the limited processing power of current mobile computers, there is a need for on-demand computing resources with minimal latency. Edge computing has already made a significant contribution to mobile networks by enabling the distribution, scaling, and faster access of computational resources at network margins closer to users, especially for power-constrained mobile devices. Offloading tasks efficiently onto the Mobile Edge Computing Server (MECS) is an important part of our proposed method. We propose a method of offloading multiple tasks to Mobile Edge Computing servers with fixed memory capacities under low-latency constraints. We efficiently calculate the optimal cumulative intrinsic profit of the offloaded tasks using the Ant Colony Optimization (ACO) model, which is flexible and versatile in the context of real-time applications.
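A compact sketch of an ACO-style selection of tasks to offload under a fixed memory capacity, maximising total profit, is shown below. The profits, memory sizes, pheromone update rule, and parameters are illustrative assumptions rather than the authors' formulation.

```python
# Illustrative sketch only: an Ant Colony Optimization loop that picks a subset of
# tasks to offload to an edge server with fixed memory capacity, maximising total
# profit (a knapsack-style abstraction of the offloading problem).
import random

profits = [10, 7, 12, 5, 9]        # intrinsic profit of offloading each task
memory = [4, 3, 6, 2, 5]           # memory each task needs on the edge server
capacity = 10                      # fixed server memory capacity
n_ants, n_iters, rho, Q = 20, 50, 0.1, 1.0
pheromone = [1.0] * len(profits)

def build_solution():
    chosen, used = [], 0
    for i in random.sample(range(len(profits)), len(profits)):
        attractiveness = pheromone[i] * (profits[i] / memory[i])
        if used + memory[i] <= capacity and random.random() < attractiveness / (1 + attractiveness):
            chosen.append(i)
            used += memory[i]
    return chosen

best, best_profit = [], 0
for _ in range(n_iters):
    for sol in (build_solution() for _ in range(n_ants)):
        p = sum(profits[i] for i in sol)
        if p > best_profit:
            best, best_profit = sol, p
    pheromone = [ph * (1 - rho) for ph in pheromone]       # evaporation
    for i in best:
        pheromone[i] += Q * best_profit / 100.0            # reinforce best-so-far

print(sorted(best), best_profit)
```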
APA, Harvard, Vancouver, ISO, and other styles
47

Hascoet, Tristan, Weihao Zhuang, Quenti Febvre, Yasuo Ariki, and Tetsuya Takiguchi. "Reducing the Memory Cost of Training Convolutional Neural Networks by CPU Offloading." Journal of Software Engineering and Applications 12, no. 08 (2019): 307–20. http://dx.doi.org/10.4236/jsea.2019.128019.

Full text
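Although no abstract is shown here, the title refers to the general technique of keeping forward-pass activations in host (CPU) memory during training and bringing them back for the backward pass. Below is a hedged PyTorch sketch of that idea, using the built-in save_on_cpu hook (available in recent PyTorch releases) as a stand-in for the paper's own mechanism.

```python
# Hedged sketch of activation offloading during CNN training: saved activations are
# moved to CPU memory in the forward pass and copied back for the backward pass.
# Uses PyTorch's save_on_cpu context manager; not the paper's implementation.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.LazyLinear(10)).to(device)
x = torch.randn(8, 3, 64, 64, device=device)

with torch.autograd.graph.save_on_cpu(pin_memory=torch.cuda.is_available()):
    loss = model(x).sum()        # activations saved for backward are kept on the CPU
loss.backward()                  # they are copied back as gradients flow
```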
APA, Harvard, Vancouver, ISO, and other styles
48

Kim, Byoung-Hak, and Chae Eun Rhee. "Making Better Use of Processing-in-Memory Through Potential-Based Task Offloading." IEEE Access 8 (2020): 61631–41. http://dx.doi.org/10.1109/access.2020.2983432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Piramu Preethika, SK, and R. Gobinath. "A survey of mobile cloud computing offloading concepts based on measurements and algorithms." International Journal of Engineering & Technology 7, no. 2.26 (2018): 13. http://dx.doi.org/10.14419/ijet.v7i2.26.12525.

Full text
Abstract:
Handheld computing devices are now everywhere, in the hands of technology-dependent people of every age, and users are looking for innovative ways to make the best use of smart mobile devices. The rapid growth in the number of smartphone users has given rise to various research topics. While technological advancements occur at both the hardware and software levels, novel ideas are being introduced. One of those ideas is mobile cloud computing, which combines cloud computing, mobile computing and wireless networks to bring rich processing resources to mobile users, network operators, and cloud computing providers. MCC offloading is a technique for overcoming the barriers faced in mobile computing, such as limited battery life, memory usage and computational difficulty, with the help of the cloud. This survey not only focuses on the concepts of offloading but also covers the measurements and basic ideas of algorithms related to mobile cloud computing.
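As a concrete reference for the offloading decision such surveys discuss, the sketch below applies the classic energy criterion: offload when the energy spent transmitting the input and idling during remote execution is lower than the energy of computing locally. The power, speed, and bandwidth figures are made-up assumptions.

```python
# Illustrative sketch of a basic mobile-cloud offloading decision based on device
# energy: compare local computation energy with transmit-plus-idle energy.
def should_offload(cycles, data_bits, s_mobile, s_cloud, bandwidth,
                   p_compute, p_tx, p_idle):
    e_local = p_compute * cycles / s_mobile                  # compute on the phone
    e_offload = (p_tx * data_bits / bandwidth                # send the input data
                 + p_idle * cycles / s_cloud)                # idle while cloud computes
    return e_offload < e_local

print(should_offload(cycles=1e9, data_bits=1e6, s_mobile=1e9, s_cloud=10e9,
                     bandwidth=5e6, p_compute=0.9, p_tx=1.3, p_idle=0.3))
```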
APA, Harvard, Vancouver, ISO, and other styles
50

Akyürek, Elkan G., Nils Kappelmann, Marc Volkert, and Hedderik van Rijn. "What You See Is What You Remember: Visual Chunking by Temporal Integration Enhances Working Memory." Journal of Cognitive Neuroscience 29, no. 12 (2017): 2025–36. http://dx.doi.org/10.1162/jocn_a_01175.

Full text
Abstract:
Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy, and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the attentional and working memory costs of temporal integration of successive target stimulus pairs embedded in rapid serial visual presentation. ERPs were measured as a function of behavioral reports: One target, two separate targets, or two targets reported as a single integrated target. N2pc amplitude, reflecting attentional processing, depended on the actual number of successive targets. The memory-related CDA and P3 components instead depended on the perceived number of targets irrespective of their actual succession. The report of two separate targets was associated with elevated amplitude, whereas integrated as well as actual single targets exhibited lower amplitude. Temporal integration thus provided an efficient means of processing sensory input, offloading working memory so that the features of two targets were consolidated and maintained at a cost similar to that of a single target.
APA, Harvard, Vancouver, ISO, and other styles