Journal articles on the topic 'Caching'


Consult the top 50 journal articles for your research on the topic 'Caching.'


1

Prasad, M., P. R. Sudha Rani, Raja Rao PBV, Pokkuluri Kiran Sree, P. T. Satyanarayana Murty, A. Satya Mallesh, M. Ramesh Babu, and Chintha Venkata Ramana. "Blockchain-Enabled On-Path Caching for Efficient and Reliable Content Delivery in Information-Centric Networks." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 27, 2023): 358–63. http://dx.doi.org/10.17762/ijritcc.v11i9.8397.

Abstract:
As the demand for online content continues to grow, traditional Content Distribution Networks (CDNs) are facing significant challenges in terms of scalability and performance. Information-Centric Networking (ICN) is a promising new approach to content delivery that aims to address these issues by placing content at the center of the network architecture. One of the key features of ICNs is on-path caching, which allows content to be cached at intermediate routers along the path from the source to the destination. On-path caching in ICNs still faces some challenges, such as the scalability of the cache and the management of cache consistency. To address these challenges, this paper proposes several alternative caching schemes that can be integrated into ICNs using blockchain technology. These schemes include Bloom filters, content-based routing, and hybrid caching, which combines the advantages of off-path and on-path caching. The proposed blockchain-enabled on-path caching mechanism ensures the integrity and authenticity of cached content, and smart contracts automate the caching process and incentivize caching nodes. To evaluate the performance of these caching alternatives, the authors conduct experiments using real-world datasets. The results show that on-path caching can significantly reduce network congestion and improve content delivery efficiency. The Bloom filter caching scheme achieved a cache hit rate of over 90% while reducing the cache size by up to 80% compared to traditional caching. The content-based routing scheme also achieved high cache hit rates while maintaining low latency.
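
The Bloom filter scheme above trades a small false-positive rate for a large reduction in the space needed to advertise which content names a router holds. As a rough illustration only (the paper's parameters and data structures are not reproduced here), a minimal Bloom filter over content names could look like this:

```python
import hashlib

class BloomFilter:
    """Compact membership set: no false negatives, tunable false-positive rate."""

    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes independent bit positions from one item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# A router can summarize its cache as a Bloom filter instead of a name list.
cache_summary = BloomFilter()
cache_summary.add("/videos/clip42/chunk0")        # hypothetical content name
print("/videos/clip42/chunk0" in cache_summary)   # True
print("/videos/other/chunk9" in cache_summary)    # False (with high probability)
```
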
2

Shuai, Ziqi, Zhenbang Chen, Kelin Ma, Kunlin Liu, Yufeng Zhang, Jun Sun, and Ji Wang. "Partial Solution Based Constraint Solving Cache in Symbolic Execution." Proceedings of the ACM on Software Engineering 1, FSE (July 12, 2024): 2493–514. http://dx.doi.org/10.1145/3660817.

Abstract:
Constraint solving is one of the main challenges for symbolic execution. Caching is an effective mechanism to reduce the number of solver invocations in symbolic execution and is adopted by many mainstream symbolic execution engines. However, caching cannot perform well on all programs, and how to improve caching's effectiveness is challenging in general. In this work, we propose a partial solution-based caching method for improving caching's effectiveness. Our key idea is to utilize the partial solutions produced inside the constraint solver to generate more cache entries. A partial solution may satisfy other constraints of symbolic execution; hence, our partial solution-based caching method naturally improves the cache hit rate. We have implemented our method on two mainstream symbolic executors (KLEE and Symbolic PathFinder) and two SMT solvers (STP and Z3). The results of extensive experiments on real-world benchmarks demonstrate that our method effectively increases the number of explored paths in symbolic execution. Our caching method achieves 1.07x to 2.3x speedups for exploring the same number of paths on different benchmarks.
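
The reuse idea is easy to see in miniature. A toy sketch of solution caching, assuming constraints are plain Python predicates over a variable assignment and `expensive_solver` stands in for a real SMT call (both are illustrative, not the authors' implementation):

```python
# Cached assignments, e.g. [{"x": 3, "y": 1}, ...]; real engines key the cache
# by normalized constraint sets, which is omitted here for brevity.
solution_cache = []

def satisfies(assignment, constraints):
    try:
        return all(c(assignment) for c in constraints)
    except KeyError:  # the assignment does not bind a variable the constraint needs
        return False

def solve(constraints, expensive_solver):
    for cached in solution_cache:          # cache probe: reuse any satisfying model
        if satisfies(cached, constraints):
            return cached
    model = expensive_solver(constraints)  # cache miss: invoke the real solver
    if model is not None:
        solution_cache.append(model)       # partial solutions would be added here too
    return model

# A previously found model can satisfy a later, different constraint set.
solution_cache.append({"x": 5})
print(solve([lambda a: a["x"] > 3], expensive_solver=lambda cs: None))  # {'x': 5}
```
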
3

Zhou, Mo, Bo Ji, Kun Peng Han, and Hong Sheng Xi. "A Cooperative Hybrid Caching Strategy for P2P Mobile Network." Applied Mechanics and Materials 347-350 (August 2013): 1992–96. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.1992.

Abstract:
Mobile network technologies have developed rapidly in recent years. To meet the increasing demand of wireless users, many multimedia proxies have been deployed over wireless networks. The caching nodes constitute a wireless caching system with a P2P architecture and provide better service to mobile users. In this paper, we formulate the caching system to optimize the consumption of network bandwidth and guarantee the response time of mobile users. Two strategies are proposed to achieve this goal: single greedy caching and cooperative hybrid caching. Single greedy caching aims to reduce bandwidth consumption from the standpoint of each caching node, while cooperative hybrid caching allows sharing and coordination among multiple nodes, taking both bandwidth consumption and popularity into account. Simulation results show that cooperative hybrid caching outperforms single greedy caching in both bandwidth consumption and delay time.
4

Dinh, Ngocthanh, and Younghan Kim. "An Energy Reward-Based Caching Mechanism for Information-Centric Internet of Things." Sensors 22, no. 3 (January 19, 2022): 743. http://dx.doi.org/10.3390/s22030743.

Abstract:
Existing information-centric networking (ICN) designs for the Internet of Things (IoT) mostly make caching decisions based on probability or content popularity. Such strategies are not always energy efficient in resource-constrained IoT: without considering the energy reward of a caching decision, inappropriate routers and content objects may be selected for caching, which may lead to negative energy rewards. In this paper, we analyze the energy consumption of content caching and content retrieval in resource-constrained IoT and calculate the caching energy reward as a key metric to measure the energy efficiency of a caching decision. We then propose an efficient cache placement and cache replacement mechanism based on the caching energy reward to improve the energy efficiency of caching decisions. Through analysis and experimental results, we show that the proposed mechanism achieves a significant improvement in terms of energy efficiency, stretch ratio, and cache hit ratio compared to state-of-the-art caching schemes.
5

Wang, Yali, and Jiachao Chen. "Collaborative Caching in Edge Computing via Federated Learning and Deep Reinforcement Learning." Wireless Communications and Mobile Computing 2022 (December 22, 2022): 1–15. http://dx.doi.org/10.1155/2022/7212984.

Abstract:
By deploying resources in the vicinity of users, edge caching can substantially reduce the latency for users to retrieve content and relieve the pressure on the backbone network. Due to the limited caching capacity and the dynamic nature of user requests, caching resources must be allocated judiciously. Some edge caching studies improve network performance by predicting content popularity and proactively caching the most popular content, but they ignore the privacy and security issues caused by the need to collect user information at a central unit. To this end, a collaborative caching strategy based on federated learning is proposed. First, federated learning is used to make distributed predictions of the preferences of users in the nodes to develop an effective content caching policy. Then, the problem of allocating caching resources to optimize the cost of video providers is formulated as a Markov decision process, and a reinforcement learning method is used to optimize the caching decisions. Compared with several baseline caching strategies in terms of cache hit rate, transmission delay, and cost, the simulation results show that the proposed content caching strategy reduces the cost of video providers while achieving a higher cache hit rate and lower average transmission delay.
6

Li, Feng, Kwok-Yan Lam, Li Wang, Zhenyu Na, Xin Liu, and Qing Pan. "Caching Efficiency Enhancement at Wireless Edges with Concerns on User’s Quality of Experience." Wireless Communications and Mobile Computing 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/1680641.

Abstract:
Content caching is a promising approach to enhancing bandwidth utilization and minimizing delivery delay for new-generation Internet applications. The design of content caching is based on the principle that popular contents are cached at appropriate network edges in order to reduce transmission delay and avoid backhaul bottlenecks. In this paper, we propose a cooperative caching replacement and efficiency optimization scheme for IP-based wireless networks. Wireless edges are designed to establish a one-hop caching information table for cache replacement in cases where not enough cache resource is available within a node's own space. After receiving a caching request, every caching node determines the weight of the required contents and provides a response according to the availability of its own caching space. Furthermore, to increase caching efficiency from a practical perspective, we introduce the concept of quality of user experience (QoE) and try to properly allocate the cache resource of the whole network to better satisfy user demands. Different caching allocation strategies are devised to enhance user QoE in various circumstances. Numerical results are further provided to justify the performance improvement of our proposal from various aspects.
7

Santhanakrishnan, Ganesh, Ahmed Amer, and Panos K. Chrysanthis. "Self-tuning caching: the Universal Caching algorithm." Software: Practice and Experience 36, no. 11-12 (2006): 1179–88. http://dx.doi.org/10.1002/spe.755.

8

Han, Luchao, Zhichuan Guo, and Xuewen Zeng. "Research on Multicore Key-Value Storage System for Domain Name Storage." Applied Sciences 11, no. 16 (August 12, 2021): 7425. http://dx.doi.org/10.3390/app11167425.

Abstract:
This article proposes a domain name caching method for multicore network-traffic capture systems that significantly improves insert latency, throughput, and hit rate. The caching method combines a cache replacement algorithm with a cache set method. It is easy to implement, low in deployment cost, and suitable for various multicore caching systems. Moreover, it reduces the use of locks by changing data structures and algorithms. Experimental results show that, compared with other caching systems, the proposed method reaches the highest throughput under multiple cores, which indicates that it is best suited for domain name caching.
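
The lock-reduction point is the most transferable mechanism here. A common way to cut lock contention in a multicore key-value cache (a generic sketch, not the paper's design) is to shard the table so each lock protects only a fraction of the keys:

```python
import threading
from collections import OrderedDict

class ShardedLRUCache:
    """Hash keys to independent shards so threads rarely contend on one lock."""

    def __init__(self, num_shards=16, per_shard_capacity=1024):
        self.shards = [OrderedDict() for _ in range(num_shards)]
        self.locks = [threading.Lock() for _ in range(num_shards)]
        self.capacity = per_shard_capacity

    def _index(self, key):
        return hash(key) % len(self.shards)

    def get(self, key):
        i = self._index(key)
        with self.locks[i]:
            shard = self.shards[i]
            if key not in shard:
                return None
            shard.move_to_end(key)         # mark as most recently used
            return shard[key]

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:
            shard = self.shards[i]
            shard[key] = value
            shard.move_to_end(key)
            if len(shard) > self.capacity:
                shard.popitem(last=False)  # evict the least recently used entry

cache = ShardedLRUCache()
cache.put("example.com", "93.184.216.34")  # hypothetical domain-name record
print(cache.get("example.com"))
```
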
9

Soleimani, Somayeh, and Xiaofeng Tao. "Caching and Placement for In-Network Caching in Device-to-Device Communications." Wireless Communications and Mobile Computing 2018 (September 26, 2018): 1–9. http://dx.doi.org/10.1155/2018/9539502.

Abstract:
Caching content at user devices constitutes a promising solution to reduce costly transmissions through the base stations (BSs). To improve the performance of in-network caching in device-to-device (D2D) communications, caching placement and content delivery should be jointly optimized. To this end, we jointly optimize caching decision and content discovery strategies by considering the successful content delivery in D2D links, with the goal of maximizing the in-network caching gain through D2D communications. Moreover, an in-network caching placement problem is formulated as an integer nonlinear optimization problem. To obtain the optimal solution for the proposed problem, Lagrange dual decomposition is applied in order to reduce the complexity. Simulation results show that the proposed algorithm has a near-optimal performance, approaching that of the exhaustive search method. Furthermore, the proposed scheme has a notable in-network caching gain and an improvement in traffic offloading compared to other caching placement schemes.
10

Naeem, Nor, Hassan, and Kim. "Compound Popular Content Caching Strategy in Named Data Networking." Electronics 8, no. 7 (July 10, 2019): 771. http://dx.doi.org/10.3390/electronics8070771.

Abstract:
The aim of named data networking (NDN) is to develop an efficient data dissemination approach by implementing a cache module within the network. Caching is one of the most prominent modules of NDN and significantly enhances the Internet architecture. NDN caching can reduce the expected flood of global data traffic by providing cache storage at intermediate nodes for transmitted contents, making data dissemination more efficient. It also reduces the content delivery time by caching popular content close to consumers. In this study, a new content caching mechanism named the compound popular content caching strategy (CPCCS) is proposed for efficient content dissemination, and its performance is measured in terms of cache hit ratio, content diversity, and stretch. The CPCCS is extensively and comparatively studied through simulations against other NDN-based caching strategies, such as max-gain in-network caching (MAGIC), the WAVE popularity-based caching strategy, hop-based probabilistic caching (HPC), LeafPopDown, most popular cache (MPC), cache capacity aware caching (CCAC), and ProbCache. The results show that the CPCCS performs better in terms of cache hit ratio, content diversity ratio, and stretch ratio than all other strategies.
11

Wang, Yantong, and Vasilis Friderikos. "A Survey of Deep Learning for Data Caching in Edge Network." Informatics 7, no. 4 (October 13, 2020): 43. http://dx.doi.org/10.3390/informatics7040043.

Abstract:
The concept of edge caching provision in emerging 5G and beyond mobile networks is a promising method to deal both with the traffic congestion problem in the core network and with the latency of accessing popular content. In that respect, end user demand for popular content can be satisfied by proactively caching it at the network edge, i.e., in close proximity to the users. In addition to model-based caching schemes, learning-based edge caching optimizations have recently attracted significant attention, and the aim hereafter is to capture these recent advances for both model-based and data-driven techniques in the area of proactive caching. This paper summarizes the utilization of deep learning for data caching in edge networks. We first outline the typical research topics in content caching and formulate a taxonomy based on network hierarchical structure. Then, the key types of deep learning algorithms are presented, ranging from supervised learning to unsupervised learning, as well as reinforcement learning. Furthermore, a comparison of state-of-the-art literature is provided from the aspects of caching topics and deep learning methods. Finally, we discuss research challenges and future directions of applying deep learning for caching.
12

Rim, Minjoong. "Mixed Micro/Macro Cache for Device-to-Device Caching Systems in Multi-Operator Environments." Sensors 24, no. 14 (July 12, 2024): 4518. http://dx.doi.org/10.3390/s24144518.

Abstract:
In a device-to-device (D2D) caching system that utilizes a device’s available storage space as a content cache, a device called a helper can provide content requested by neighboring devices, thereby reducing the burden on the wireless network. To enhance the efficiency of a limited-size cache, one can consider not only macro caching, which is content-based caching based on content popularity, but also micro caching, which is chunk-based sequential prefetching and stores content chunks slightly behind the one that a nearby device is currently viewing. If the content in a cache can be updated intermittently even during peak hours, the helper can improve the hit ratio by performing micro caching, which stores chunks that are expected to be requested by nearby devices in the near future. In this paper, we discuss the performance and effectiveness of micro D2D caching when there are multiple operators, the helpers can communicate with the devices of other operators, and the operators are under a low load independently of each other. We also discuss the ratio of micro caching in the cache area when the cache space is divided into macro and micro cache areas. Good performance can be achieved by using micro D2D caching in conjunction with macro D2D caching when macro caching alone does not provide sufficient performance, when users are likely to continue viewing the content they are currently viewing, when the content update cycle for the cache is short and a sufficient number of chunks can be updated for micro caching, and when there are multiple operators in the region.
13

Chae, Seong Ho, and Wan Choi. "Caching Placement in Stochastic Wireless Caching Helper Networks: Channel Selection Diversity via Caching." IEEE Transactions on Wireless Communications 15, no. 10 (October 2016): 6626–37. http://dx.doi.org/10.1109/twc.2016.2586841.

14

Hao, Yixue, Min Chen, Donggang Cao, Wenlai Zhao, Ivan Petrov, Vitaly Antonenko, and Ruslan Smeliansky. "Cognitive-Caching: Cognitive Wireless Mobile Caching by Learning Fine-Grained Caching-Aware Indicators." IEEE Wireless Communications 27, no. 1 (February 2020): 100–106. http://dx.doi.org/10.1109/mwc.001.1900273.

15

Bai, Jingpan, Silei Zhu, and Houling Ji. "Blockchain Based Decentralized and Proactive Caching Strategy in Mobile Edge Computing Environment." Sensors 24, no. 7 (April 3, 2024): 2279. http://dx.doi.org/10.3390/s24072279.

Abstract:
In the mobile edge computing (MEC) environment, edge caching can provide timely data response services for intelligent scenarios. However, due to the limited storage capacity of edge nodes and the possibility of malicious node behavior, selecting the contents to cache and realizing decentralized, secure data caching remain challenging. In this paper, a blockchain-based decentralized and proactive caching strategy is proposed in an MEC environment to address this problem. The novelty is that blockchain is adopted in an MEC environment together with a proactive caching strategy based on node utility, and the corresponding optimization problem is formulated. The blockchain is used to build a secure and reliable service environment, and the optimal caching strategy is obtained based on linear relaxation and the interior point method. Additionally, in a content caching system there is a trade-off between cache space and node utility, which the proposed caching strategy addresses. There is also a trade-off between the consensus process delay of blockchain and the caching latency of content; an offline consensus authentication method is adopted to reduce the influence of the consensus process delay on content caching. The key finding is that the proposed algorithm can reduce latency and ensure secure data caching in an IoT environment. Finally, the simulation experiment shows that the proposed algorithm achieves up to 49.32%, 43.11%, and 34.85% improvements in cache hit rate, average content response latency, and average system utility, respectively, compared to the random content caching algorithm, and up to 9.67%, 8.11%, and 5.95% increases, successively, compared to the greedy content caching algorithm.
16

Nguyen, Quang Ngoc, Jiang Liu, Zhenni Pan, Ilias Benkacem, Toshitaka Tsuda, Tarik Taleb, Shigeru Shimamoto, and Takuro Sato. "PPCS: A Progressive Popularity-Aware Caching Scheme for Edge-Based Cache Redundancy Avoidance in Information-Centric Networks." Sensors 19, no. 3 (February 8, 2019): 694. http://dx.doi.org/10.3390/s19030694.

Abstract:
This article proposes a novel chunk-based caching scheme known as the Progressive Popularity-Aware Caching Scheme (PPCS) to improve content availability and eliminate the cache redundancy issue of Information-Centric Networking (ICN). Particularly, the proposal considers both entire-object caching and partial-progressive caching for popular and non-popular content objects, respectively. In the case that the content is not popular enough, PPCS first caches initial chunks of the content at the edge node and then progressively continues caching subsequent chunks at upstream Content Nodes (CNs) along the delivery path over time, according to the content popularity and each CN position. Therefore, PPCS efficiently avoids wasting cache space for storing on-path content duplicates and improves cache diversity by allowing no more than one replica of a specified content to be cached. To enable a complete ICN caching solution for communication networks, we also propose an autonomous replacement policy to optimize the cache utilization by maximizing the utility of each CN from caching content items. By simulation, we show that PPCS, utilizing edge-computing for the joint optimization of caching decision and replacement policies, considerably outperforms relevant existing ICN caching strategies in terms of latency (number of hops), cache redundancy, and content availability (hit rate), especially when the CN’s cache size is small.
17

Piastou, Mikita. "Evaluating the Efficiency of Caching Strategies in Reducing Application Latency." Journal of Science & Technology 4, no. 6 (November 6, 2023): 83–98. http://dx.doi.org/10.55662/jst.2023.4606.

Abstract:
The paper discusses the efficiency of various caching strategies in reducing application latency. A test application was developed to measure latency under various conditions using logging and profiling tools. The scenario tests simulated high traffic loads, large data sets, and frequent access patterns. The simulation was done in Java; accordingly, T-tests and ANOVA were conducted in order to measure the significance of the results. The findings showed that the highest reduction in latency was achieved by in-memory caching: response time improved by up to 62.6% compared to non-cached scenarios. File-based caching decreased request processing latency by about 36.6%, while database caching provided an improvement of 55.1%. These results underscore the substantial benefits of applying the right caching mechanism. In-memory caching proved most efficient in high-speed data access applications, while file-based and database caching proved more useful in certain content-heavy scenarios. The study provides insight for developers on how to identify the proper caching mechanism and implementation to further boost the responsiveness and efficiency of applications. Recommendations for further improvement include hybrid caching strategies, further optimization of eviction policies, and integration with edge computing for even better performance.
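
The reported ranking (in-memory fastest, file-based slowest) can be reproduced in spirit with a tiny microbenchmark. The sketch below is a toy in Python rather than the paper's Java test application, and the record layout and key names are invented:

```python
import json
import os
import tempfile
import time

store = {"user:42": {"name": "Ada", "cart": [1, 2, 3]}}  # toy "database" record

# In-memory cache: a plain dict lookup.
memory_cache = dict(store)

# File-based cache: one JSON file per key in a temporary directory.
cache_dir = tempfile.mkdtemp()
with open(os.path.join(cache_dir, "user_42"), "w") as f:
    json.dump(store["user:42"], f)

def file_lookup():
    with open(os.path.join(cache_dir, "user_42")) as f:
        return json.load(f)

def avg_us(fn, n=10_000):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1e6  # microseconds per call

print(f"in-memory:  {avg_us(lambda: memory_cache['user:42']):.2f} us/lookup")
print(f"file-based: {avg_us(file_lookup):.2f} us/lookup")
```
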
18

Yan, Li, and Yan Sheng Qu. "Research on Caching Mechanism Based on User Community." Applied Mechanics and Materials 672-674 (October 2014): 2013–16. http://dx.doi.org/10.4028/www.scientific.net/amm.672-674.2013.

Abstract:
This paper presents a data caching framework based on a two-layer Chord. Caches are shared by users within a domain, and frequently accessed information is shared by inter-domain users, which effectively reduces the caching system's overhead. We also introduce a cache replacement algorithm based on the user community, in particular the user's influence in the community and the dynamics of information flow. Simulation results and experiments in a test-bed environment show that the community-based caching scheme outperforms most existing distributed caching schemes.
19

Li, Qi, Xiaoxiang Wang, Dongyu Wang, Yibo Zhang, Yanwen Lan, Qiang Liu, and Lei Song. "Analysis of an SDN-Based Cooperative Caching Network with Heterogeneous Contents." Electronics 8, no. 12 (December 6, 2019): 1491. http://dx.doi.org/10.3390/electronics8121491.

Abstract:
The ubiquity of data-enabled mobile devices and wireless-enabled data applications has fostered the rapid development of wireless content caching, which is an efficient approach to mitigating cellular traffic pressure. Considering the content characteristics and real caching circumstances, a software-defined network (SDN)-based cooperative caching system is presented. First, we define a new file block library with heterogeneous content attributes [file popularity, mobile user (MU) preference, file size]. An SDN-based three-tier caching network is presented in which the base station supplies control coverage for the entire macrocell, while cache helpers (CHs) and MUs with cache capacities offer data coverage. Using the 'most popular content' and 'largest diversity content' principles, a distributed cooperative caching strategy is proposed in which the caches of the MUs store the most popular contents of the file block library to mitigate the effect of MU mobility, and those of the CHs store the remaining contents in a probabilistic caching manner to enrich the content diversity and reduce the MU caching pressure. The request meet probability (RMPro) is subsequently proposed, and the optimal caching distribution of the contents in the probabilistic caching strategy is obtained via optimization. Finally, using the result of RMPro optimization, we also analyze the content retrieval delays that occur when a typical MU requests a file block or a whole file. Simulation results demonstrate that the proposed caching system can achieve quasi-optimal revenue performance compared with other contrasting schemes.
20

Al-Sayeh, Hani, Muhammad Attahir Jibril, Muhammad Waleed Bin Saeed, and Kai-Uwe Sattler. "SparkCAD." Proceedings of the VLDB Endowment 15, no. 12 (August 2022): 3694–97. http://dx.doi.org/10.14778/3554821.3554877.

Abstract:
Developers of Apache Spark applications can accelerate their workloads by caching suitable intermediate results in memory and reusing them rather than recomputing them all over again every time they are needed. However, as scientific workflows are becoming more complex, application developers are becoming more prone to making wrong caching decisions, which we refer to as caching anomalies, that lead to poor performance. We present and give a demonstration of Spark Caching Anomalies Detector (SparkCAD), a developer decision-support tool that visualizes the logical plan of Spark applications and detects caching anomalies.
21

Kuznetsov, O. V., L. E. Chala, and S. G. Udovenko. "Neural network data caching method." Bionics of Intelligence 1, no. 90 (June 2, 2018): 84–90. https://doi.org/10.30837/bi.2018.1(90).12.

Abstract:
The main existing types of caching and algorithms for storing cached data are analyzed, and a new type of caching based on neural networks is proposed. The results of a proof-of-concept project are examined, and the proposed type of caching is considered as an approach to the problem posed by Belady's algorithm. The range of subject areas in which neural caching can be applied is also determined.
22

Naeem, Muhammad, Rashid Ali, Byung-Seo Kim, Shahrudin Nor, and Suhaidi Hassan. "A Periodic Caching Strategy Solution for the Smart City in Information-Centric Internet of Things." Sustainability 10, no. 7 (July 23, 2018): 2576. http://dx.doi.org/10.3390/su10072576.

Abstract:
Named Data Networking (NDN) is an evolving network model of the information-centric networking (ICN) paradigm that provides name-based data contents. In-network caching is responsible for disseminating these contents in a scalable and cost-efficient way. Due to the rapid expansion of Internet of Things (IoT) traffic, ICN is envisioned to be an appropriate architecture to maintain IoT networks. In fact, ICN offers unique naming, multicast communications and, most beneficially, in-network caching that minimizes the response latency and server load. This paper studies ICN caching policies for the IoT environment in terms of content placement strategies, with the aim of recognizing which caching strategy is most suitable for IoT networks. Simulation results show the impact of different ICN-based caching strategies for IoT; among these, periodic caching is the most appropriate strategy for IoT environments in terms of stretch, decreasing the retrieval latency and improving the cache hit ratio.
23

Krause, Douglas J., and Tracey L. Rogers. "Food caching by a marine apex predator, the leopard seal (Hydrurga leptonyx)." Canadian Journal of Zoology 97, no. 6 (June 2019): 573–78. http://dx.doi.org/10.1139/cjz-2018-0203.

Abstract:
The foraging behaviors of apex predators can fundamentally alter ecosystems through cascading predator–prey interactions. Food caching is a widely studied, taxonomically diverse behavior that can modify competitive relationships and affect population viability. We address predictions that food caching would not be observed in the marine environment by summarizing recent caching reports from two marine mammal and one marine reptile species. We also provide multiple caching observations from disparate locations for a fourth marine predator, the leopard seal (Hydrurga leptonyx (de Blainville, 1820)). Drawing from consistent patterns in the terrestrial literature, we suggest the unusual diversity of caching strategies observed in leopard seals is due to high variability in their polar marine habitat. We hypothesize that caching is present across the spectrum of leopard seal social dominance; however, prevalence is likely to increase in smaller, less-dominant animals that hoard to gain competitive advantage. Given the importance of this behavior, we draw attention to the high probability of observing food caching behavior in other marine species.
24

Ma, Zhenjie, Haoran Wang, Ke Shi, and Xinda Wang. "Learning Automata Based Caching for Efficient Data Access in Delay Tolerant Networks." Wireless Communications and Mobile Computing 2018 (2018): 1–19. http://dx.doi.org/10.1155/2018/3806907.

Abstract:
Effective data access is one of the major challenges in Delay Tolerant Networks (DTNs), which are characterized by intermittent network connectivity and unpredictable node mobility. Various data caching schemes have been proposed to improve the performance of data access in DTNs. However, most existing data caching schemes perform poorly due to the lack of global network state information and the changing network topology in DTNs. In this paper, we propose a novel data caching scheme based on cooperative caching in DTNs, aiming at improving the success rate of data access and reducing the data access delay. In the proposed scheme, learning automata are utilized to select a set of caching nodes as the Caching Node Set (CNS) in DTNs. Unlike existing caching schemes that fail to address the challenging characteristics of DTNs, our scheme is designed to automatically self-adjust to the changing network topology through well-designed voting and updating processes. The proposed scheme improves the overall performance of data access in DTNs compared with former caching schemes. Simulations verify the feasibility of our scheme and the improvements in performance.
25

Keller, Robert M., and M. R. Sleep. "Applicative caching." ACM Transactions on Programming Languages and Systems 8, no. 1 (January 2, 1986): 88–108. http://dx.doi.org/10.1145/5001.5004.

26

DeBrabant, Justin, Andrew Pavlo, Stephen Tu, Michael Stonebraker, and Stan Zdonik. "Anti-caching." Proceedings of the VLDB Endowment 6, no. 14 (September 2013): 1942–53. http://dx.doi.org/10.14778/2556549.2556575.

27

Gray, Howard Richard. "Geo-Caching." Journal of Museum Education 32, no. 3 (September 2007): 285–91. http://dx.doi.org/10.1080/10598650.2007.11510578.

28

Englert, Matthias, Heiko Röglin, Jacob Spönemann, and Berthold Vöcking. "Economical Caching." ACM Transactions on Computation Theory 5, no. 2 (July 2013): 1–21. http://dx.doi.org/10.1145/2493246.2493247.

29

Afek, Yehuda, Geoffrey Brown, and Michael Merritt. "Lazy caching." ACM Transactions on Programming Languages and Systems 15, no. 1 (January 1993): 182–205. http://dx.doi.org/10.1145/151646.151651.

30

Barr, Thomas W., Alan L. Cox, and Scott Rixner. "Translation caching." ACM SIGARCH Computer Architecture News 38, no. 3 (June 19, 2010): 48–59. http://dx.doi.org/10.1145/1816038.1815970.

31

Srinath, Harsha, and Shiva Shankar Ramanna. "Web caching." Resonance 7, no. 7 (July 2002): 54–62. http://dx.doi.org/10.1007/bf02836754.

32

Nanda, Pranay, Shamsher Singh, and G. L. Saini. "A Review of Web Caching Techniques and Caching Algorithms for Effective and Improved Caching." International Journal of Computer Applications 128, no. 10 (October 15, 2015): 41–45. http://dx.doi.org/10.5120/ijca2015906656.

33

Zhang, Jiaqi, Wenjing Liu, Li Zhang, and Jie Tian. "Enhanced In-Network Caching for Deep Learning in Edge Networks." Electronics 13, no. 23 (November 24, 2024): 4632. http://dx.doi.org/10.3390/electronics13234632.

Abstract:
With the deep integration of communication technology and Internet of Things technology, the edge network structure is becoming increasingly dense and heterogeneous. At the same time, in the edge network environment, characteristics such as wide-area differentiated services, decentralized deployment of computing and network resources, and highly dynamic network environment lead to the deployment of redundant or insufficient edge cache nodes, which restricts the efficiency of network service caching and resource allocation. In response to the above problems, research on the joint optimization of service caching and resources in the decentralized edge network scenario is carried out. Therefore, we have conducted research on the collaborative caching of training data among multiple edge nodes and optimized the number of collaborative caching nodes. Firstly, we use a multi-queue model to model the collaborative caching process. This model can be used to simulate the in-network cache replacement process on collaborative caching nodes. In this way, we can describe the data flow and storage changes during the caching process more clearly. Secondly, considering the limitation of storage space of edge nodes and the demand for training data within a training epoch, we propose a stochastic gradient descent algorithm to obtain the optimal number of caching nodes. This algorithm entirely takes into account the resource constraints in practical applications and provides an effective way to optimize the number of caching nodes. Finally, the simulation results clearly show that the optimized number of caching nodes can significantly improve the adequacy rate and hit rate of the training data, with the adequacy rate reaching 84% and the hit rate reaching 100%.
34

Kim, Yunkon, and Eui-Nam Huh. "EDCrammer: An Efficient Caching Rate-Control Algorithm for Streaming Data on Resource-Limited Edge Nodes." Applied Sciences 9, no. 12 (June 23, 2019): 2560. http://dx.doi.org/10.3390/app9122560.

Abstract:
This paper explores data caching as a key factor of edge computing. State-of-the-art research on data caching at edge nodes mainly considers reactive caching, proactive caching, and machine-learning-based caching, which can be a heavy task for edge nodes. However, edge nodes usually have fewer computing resources than cloud datacenters, as they are geo-distributed away from the administrator. Therefore, a caching algorithm should be lightweight to save computing resources on edge nodes. In addition, data caching should be agile because it has to support high-quality services on edge nodes. Accordingly, this paper proposes a lightweight, agile caching algorithm, EDCrammer (Efficient Data Crammer), which performs agile operations to control the caching rate for streaming data by using an enhanced PID (Proportional-Integral-Derivative) controller. Experimental results show the algorithm's significant value in each scenario. In four common scenarios, the desired cache utilization was reached in 1.1 s on average and then maintained within a 4–7% deviation. The cache hit ratio is about 96%, and the optimal cache capacity is around 1.5 MB. Thus, EDCrammer can help distribute streaming data traffic to the edge nodes, mitigate the uplink load on the central cloud, and ultimately provide users with high-quality video services. We also hope that EDCrammer can improve overall service quality in 5G environments, Augmented Reality/Virtual Reality (AR/VR), Intelligent Transportation Systems (ITS), the Internet of Things (IoT), etc.
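
The control-loop idea is straightforward to sketch. Below is a textbook PID controller steering a simulated cache toward a target utilization; the gains and the crude cache-fill model are illustrative and not taken from EDCrammer:

```python
class PIDController:
    """Classic PID loop: drive a measured value toward a setpoint."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt=1.0):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Steer cache utilization toward 80%; all constants are invented for the demo.
pid = PIDController(kp=0.6, ki=0.1, kd=0.05, setpoint=0.8)
utilization, rate = 0.0, 0.0
for step in range(10):
    rate = max(0.0, rate + pid.update(utilization))   # caching-rate adjustment
    utilization = min(1.0, utilization + 0.1 * rate)  # crude cache-fill model
    print(f"step {step}: rate={rate:.2f}, utilization={utilization:.2f}")
```
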
35

Man, Dapeng, Yao Wang, Hanbo Wang, Jiafei Guo, Jiguang Lv, Shichang Xuan, and Wu Yang. "Information-Centric Networking Cache Placement Method Based on Cache Node Status and Location." Wireless Communications and Mobile Computing 2021 (September 14, 2021): 1–13. http://dx.doi.org/10.1155/2021/5648765.

Abstract:
Information-Centric Networking with caching is a very promising future network architecture. The research on its cache deployment strategy is divided into three categories, namely, noncooperative cache, explicit collaboration cache, and implicit collaboration cache. Noncooperative caching can cause problems such as high content repetition rate in the web cache space. Explicit collaboration caching generally reflects the best caching effect but requires a lot of communication to satisfy the exchange of cache node information and depends on the controller to perform the calculation. On this basis, implicit cooperative caching can reduce the information exchange and calculation between cache nodes while maintaining a good caching effect. Therefore, this paper proposes an on-path implicit cooperative cache deployment method based on the dynamic LRU-K cache replacement strategy. This method evaluates the cache nodes based on their network location and state and selects the node with the best state value on the transmission path for caching. Each request will only select one or two nodes for caching on the request path to reduce the redundancy of the data. Simulation experiments show that the cache deployment method based on the state and location of the cache node can improve the hit rate and reduce the average request length.
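
The dynamic LRU-K replacement strategy mentioned above builds on classic LRU-K, which evicts the entry whose K-th most recent reference is oldest and thereby filters out one-off accesses. A minimal sketch of the classic policy (not the paper's dynamic variant):

```python
import time
from collections import defaultdict

class LRUKCache:
    """Evict the entry whose K-th most recent reference is oldest (LRU-K).
    Entries referenced fewer than K times are preferred eviction victims."""

    def __init__(self, capacity, k=2):
        self.capacity, self.k = capacity, k
        self.data = {}
        self.history = defaultdict(list)  # key -> reference timestamps, newest last

    def _kth_recent(self, key):
        refs = self.history[key]
        return refs[-self.k] if len(refs) >= self.k else float("-inf")

    def get(self, key):
        if key not in self.data:
            return None
        self.history[key].append(time.monotonic())
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=self._kth_recent)  # max backward K-distance
            del self.data[victim]
        self.data[key] = value
        self.history[key].append(time.monotonic())
```
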
36

Zhou, Tianchi, Peng Sun, and Rui Han. "An Active Path-Associated Cache Scheme for Mobile Scenes." Future Internet 14, no. 2 (January 19, 2022): 33. http://dx.doi.org/10.3390/fi14020033.

Abstract:
With the widespread growth of mass content, information-centric networks (ICN) have become one of the research hotspots of future network architecture. One of the important features of ICN is ubiquitous in-network caching. In recent years, the explosive growth of mobile devices has brought content dynamics, which poses a new challenge to the original ICN caching mechanism. This paper focuses on the WiFi mobile scenario of ICN. We design a new path-associated active caching scheme to shorten the time delay of users obtaining content to enhance the user experience. In this article, based on the WiFi scenario, we first propose a solution for neighbor access point selection from a theoretical perspective, considering the caching cost and transition probability. The cache content is then forwarded based on the selected neighbor set. For cached content, we propose content freshness according to mobile characteristics and consider content popularity at the same time. For cache nodes, we focus on the size of the remaining cache space and the number of hops from the cache to the user. We have implemented this strategy based on the value of caching on the forwarding path. The simulation results show that our caching strategy has a significant improvement in performance compared with active caching and other caching strategies.
37

Zheng, Yi-fan, Ning Wei, and Yi Liu. "Collaborative Computation for Offloading and Caching Strategy Using Intelligent Edge Computing." Mobile Information Systems 2022 (July 30, 2022): 1–12. http://dx.doi.org/10.1155/2022/4840801.

Abstract:
Computation offloading and caching are well-established techniques for supporting resource-intensive mobile applications. Moreover, with the rise of cooperative mobile applications, offloaded tasks can be replicated when several users are within easy reach of one another. However, the problematic characteristics of offloading and caching delay bandwidth transfer from mobile computing devices to cloud computing. This paper proposes the intellectual power computing framework (IPCF), a new technical approach to restrict these issues and unwanted behavior in offloading and caching. IPCF builds on two conventional offloading and caching strategies: a systematic offloading technique and managerial migrant caching. The systematic offloading technique migrates data transfer on a destination-to-location basis in order to restrict network delays, while managerial migrant caching duplicates the data required by the mobile terminals (MTs) from remote cloud storage to the mobile application to reduce access time. Shortcomings of current techniques are avoided, and solutions are enhanced for a better communication strategy. Simulation analysis shows that IPCF performs better, reaching efficient outcomes for the offloading and caching processes.
38

Zhang, Xinyu, Zhigang Hu, Meiguang Zheng, Yang Liang, Hui Xiao, Hao Zheng, and Aikun Xu. "LFDC: Low-Energy Federated Deep Reinforcement Learning for Caching Mechanism in Cloud–Edge Collaborative." Applied Sciences 13, no. 10 (May 16, 2023): 6115. http://dx.doi.org/10.3390/app13106115.

Abstract:
The optimization of caching mechanisms has long been a crucial research focus in cloud–edge collaborative environments. Effective caching strategies can substantially enhance user experience quality in these settings. Deep reinforcement learning (DRL), with its ability to perceive the environment and develop intelligent policies online, has been widely employed for designing caching strategies. Recently, federated learning combined with DRL has been gaining popularity for optimizing caching strategies while protecting training data privacy from eavesdropping attacks. However, online federated deep reinforcement learning algorithms face high environmental dynamics, and real-time training can result in increased training energy consumption despite improving caching efficiency. To address this issue, we propose a low-energy federated deep reinforcement learning strategy for caching mechanisms (LFDC) that balances caching efficiency and training energy consumption. The LFDC strategy encompasses a novel energy efficiency model, a deep reinforcement learning mechanism, and a dynamic energy-saving federated policy. Our experimental results demonstrate that the proposed LFDC strategy significantly outperforms existing benchmarks in terms of energy efficiency.
39

Naeem, Muhammad Ali, Rehmat Ullah, Sushank Chudhary, and Yahui Meng. "A Critical Analysis of Cooperative Caching in Ad Hoc Wireless Communication Technologies: Current Challenges and Future Directions." Sensors 25, no. 4 (February 19, 2025): 1258. https://doi.org/10.3390/s25041258.

Abstract:
The exponential growth of wireless traffic has imposed new technical challenges on the Internet and defined new approaches to dealing with its intensive use. Caching, especially cooperative caching, has become a revolutionary paradigm shift to advance environments based on wireless technologies to enable efficient data distribution and support the mobility, scalability, and manageability of wireless networks. Mobile ad hoc networks (MANETs), wireless mesh networks (WMNs), Wireless Sensor Networks (WSNs), and Vehicular ad hoc Networks (VANETs) have adopted caching practices to overcome these hurdles progressively. In this paper, we discuss the problems and issues in the current wireless ad hoc paradigms as well as spotlight versatile cooperative caching as the potential solution to the increasing complications in ad hoc networks. We classify and discuss multiple cooperative caching schemes in distinct wireless communication contexts and highlight the advantages of applicability. Moreover, we identify research directions to further study and enhance caching mechanisms concerning new challenges in wireless networks. This extensive review offers useful findings on the design of sound caching strategies in the pursuit of enhancing next-generation wireless networks.
40

Amey Pophali. "Distributed caching strategies to enhance E-commerce transaction speed." World Journal of Advanced Research and Reviews 26, no. 2 (May 30, 2025): 1860–71. https://doi.org/10.30574/wjarr.2025.26.2.1809.

Abstract:
Distributed caching represents a critical architectural strategy for enhancing transaction speed in e-commerce environments. This article examines how strategically positioning frequently accessed data across multiple networked nodes significantly reduces latency while decreasing database load. The assessment framework developed for evaluating caching technologies incorporates both quantitative performance metrics and practical implementation considerations specific to e-commerce workloads. Results demonstrate that in-memory solutions consistently outperform disk-based alternatives, with hybrid caching architectures showing superior performance when aligned with specific data requirements. Economic analysis reveals compelling justifications for distributed caching across various business scales, though implementation challenges including consistency management, cold-start phenomena, and security implications must be addressed. Future directions point toward machine learning for predictive caching, edge computing integration, serverless compatibility, and advanced invalidation strategies that promise to further optimize distributed caching capabilities for next-generation e-commerce platforms.
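
A core building block behind "strategically positioning frequently accessed data across multiple networked nodes" is a key-to-node mapping that survives node churn. The article does not prescribe a specific scheme, so the sketch below shows the standard consistent-hashing approach with virtual nodes; the node names are invented:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to cache nodes so that adding or removing a node remaps few keys."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []                       # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):          # virtual nodes smooth the distribution
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(s):
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("session:12345"))  # the same key always routes to the same node
```
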
41

Zulfa, Mulki Indana, Rudy Hartanto, and Adhistya Erna Permanasari. "Caching strategy for Web application – a systematic literature review." International Journal of Web Information Systems 16, no. 5 (October 5, 2020): 545–69. http://dx.doi.org/10.1108/ijwis-06-2020-0032.

Abstract:
Purpose: Internet users and Web-based applications continue to grow every day, and the response time of a Web application largely determines the convenience of its users. Caching Web content is one strategy that can be used to speed up response time. This strategy is divided into three main techniques, namely, Web caching, Web prefetching and application-level caching. The purpose of this paper is to put forward a literature review of caching strategy research that can be used in Web-based applications. Design/methodology/approach: The method comprised four steps: determining the review protocol, conducting the review process, analyzing pros and cons, and drawing conclusions. The review was carried out by searching literature from leading journals and conferences, starting from keywords related to caching strategies. To keep the literature current with developments in Web technology, search results were limited to the past 10 years, in English only, and related to computer science only. Findings: Web caching and Web prefetching are slightly overlapping techniques: they share the goal of reducing latency on the user's side but are motivated by different basic mechanisms. Web caching uses a cache replacement mechanism, i.e., an algorithm for replacing cache objects in memory when the cache capacity is full, whereas Web prefetching uses a prediction mechanism for cache objects that may be accessed in the future. This paper also contributes practical guidelines for choosing the appropriate caching strategy for Web-based applications. Originality/value: This paper conducts a state-of-the-art review of caching strategies that can be used in Web applications. Exclusively, it presents a taxonomy and the pros and cons of selected research, and discusses the data sets that are often used in caching strategy research. It also provides practical instructions for Web developers to decide on a caching strategy.
42

Hurly, T. Andrew, and Raleigh J. Robertson. "Scatterhoarding by territorial red squirrels: a test of the optimal density model." Canadian Journal of Zoology 65, no. 5 (May 1, 1987): 1247–52. http://dx.doi.org/10.1139/z87-194.

Abstract:
We observed a high degree of scatterhoarding in a population of red squirrels and tested two predictions of the Optimal Density Model (ODM): (1) large food items will be cached at a greater distance from their source than small items; and (2) caches will be uniformly distributed about their source. Caching experiments supported prediction 1. Red squirrels carried large food items farther than small items before caching them. Prediction 2 was not supported; caches were distributed nonuniformly about their source both within and among caching bouts. We present a simple null model for scatterhoarding, which demonstrates that prediction 1 is not exclusive to the Optimal Density Model. Analyses of our cone-caching data and published data suggested that "optimal densities" were not the primary goal of the caching animal, but rather the result of a positive relationship between food value and investment in caching (carrying distance).
43

Park, Seongsoo, Minseop Jeong, and Hwansoo Han. "CCA: Cost-Capacity-Aware Caching for In-Memory Data Analytics Frameworks." Sensors 21, no. 7 (March 26, 2021): 2321. http://dx.doi.org/10.3390/s21072321.

Abstract:
To process data from IoT and wearable devices, analysis tasks are often offloaded to the cloud. As the amount of sensing data ever increases, optimizing the data analytics frameworks is critical to the performance of processing sensed data. A key approach to speeding up data analytics frameworks in the cloud is caching intermediate data that is used repeatedly in iterative computations. Existing analytics engines implement caching with various approaches: some use run-time mechanisms with dynamic profiling, while others rely on programmers to decide which data to cache. Even though the caching discipline has been investigated long enough in computer systems research, recent data analytics frameworks still leave room for optimization. As sophisticated caching should consider complex execution contexts such as cache capacity, the size of data to cache, victims to evict, etc., often no general solution exists for data analytics frameworks. In this paper, we propose an application-specific cost-capacity-aware caching scheme for in-memory data analytics frameworks. We use a cost model, built from multiple representative inputs, and an execution flow analysis, extracted from the DAG schedule, to select primary candidates to cache among intermediate data. After the caching candidates are determined, the optimal caching is automatically selected during execution, without programmers having to manually determine caching for the intermediate data. We implemented our scheme in Apache Spark and experimentally evaluated it on HiBench benchmarks. Compared to the caching decisions in the original benchmarks, our scheme increases performance by 27% with sufficient cache memory and by 11% with insufficient cache memory, respectively.
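
The paper's cost model is not reproduced in the abstract, but the selection step it feeds can be illustrated with a simple greedy heuristic: rank intermediate datasets by recomputation cost saved per byte and cache within capacity. The names and numbers below are invented:

```python
def select_datasets_to_cache(candidates, capacity_bytes):
    """Greedy cost-capacity-aware selection: prefer datasets whose
    (recompute_cost * reuse_count) per byte is highest, while they fit."""
    ranked = sorted(
        candidates,
        key=lambda d: d["recompute_cost"] * d["reuses"] / d["size"],
        reverse=True,
    )
    chosen, used = [], 0
    for d in ranked:
        if used + d["size"] <= capacity_bytes:
            chosen.append(d["name"])
            used += d["size"]
    return chosen

candidates = [
    {"name": "parsed_logs",  "size": 4_000_000_000, "recompute_cost": 120, "reuses": 5},
    {"name": "joined_table", "size": 9_000_000_000, "recompute_cost": 300, "reuses": 2},
    {"name": "features",     "size": 1_000_000_000, "recompute_cost": 80,  "reuses": 8},
]
print(select_datasets_to_cache(candidates, capacity_bytes=6_000_000_000))
# ['features', 'parsed_logs'] -- 'joined_table' saves less recomputation per byte
```
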
44

Sheraz, Muhammad, Shahryar Shafique, Sohail Imran, Muhammad Asif, Rizwan Ullah, Muhammad Ibrar, Jahanzeb Khan, and Lunchakorn Wuttisittikulkij. "A Reinforcement Learning Based Data Caching in Wireless Networks." Applied Sciences 12, no. 11 (June 3, 2022): 5692. http://dx.doi.org/10.3390/app12115692.

Abstract:
Data caching has emerged as a promising technique to handle growing data traffic and backhaul congestion in wireless networks. However, there is a concern regarding how and where to place contents to optimize data access by the users. Data caching can be exploited close to users by deploying cache entities at Small Base Stations (SBSs). In this approach, SBSs cache contents through the core network during off-peak traffic hours; then, SBSs provide cached contents to content-demanding users during peak traffic hours with low latency. In this paper, we exploit the potential of data caching at the SBS level to minimize data access delay. We propose an intelligence-based data caching mechanism inspired by an artificial intelligence approach known as Reinforcement Learning (RL). Our proposed RL-based data caching mechanism is adaptive to dynamic learning and tracks network states to capture users' diverse and varying data demands. Our proposed approach optimizes data caching at the SBS level by observing users' data demands and locations to efficiently utilize the limited cache resources of the SBS. Extensive simulations are performed to evaluate the performance of the proposed caching mechanism based on various factors such as caching capacity, data library size, etc. The obtained results demonstrate that our proposed caching mechanism achieves a 4% performance gain in terms of delay vs. contents, a 3.5% performance gain in terms of delay vs. users, a 2.6% performance gain in terms of delay vs. cache capacity, an 18% performance gain in terms of percentage traffic offloading vs. popularity skewness (γ), and a 6% performance gain in terms of backhaul saving vs. cache capacity.
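
The abstract does not spell out the state and reward design, so the following is only a toy: tabular Q-learning deciding whether to cache each requested content at an SBS, with Zipf-like demand and an arbitrary eviction rule. Every constant here is invented for illustration:

```python
import random
from collections import defaultdict

# State = (content id, is it currently cached?); action = 0 (skip) or 1 (cache).
# Reward: +1 for serving a request from cache, -0.1 storage cost when caching.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.2
cached, CAPACITY = set(), 2

def step(content):
    state = (content, content in cached)
    if random.random() < epsilon:
        action = random.choice([0, 1])                     # explore
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])  # exploit
    reward = 1.0 if content in cached else 0.0
    if action == 1 and content not in cached:
        if len(cached) >= CAPACITY:
            cached.pop()           # evict an arbitrary entry (gross simplification)
        cached.add(content)
        reward -= 0.1
    next_state = (content, content in cached)
    best_next = max(Q[(next_state, 0)], Q[(next_state, 1)])
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

random.seed(1)
for _ in range(5000):
    step(random.choices("abcde", weights=[8, 4, 2, 1, 1])[0])  # Zipf-like demand
print(sorted(cached))  # the most popular contents tend to end up cached
```
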
45

Thodupunuri, Mohit. "Security and Performance in Modern CDN Caching: A Study of Akamai's Caching Infrastructure." International Journal of Science and Research (IJSR) 14, no. 1 (January 27, 2025): 715–18. https://doi.org/10.21275/sr25114224021.

46

Patelpaik, H. B. "Machine Learning-Based Optimization of Web Caching: A Support Vector Machine Model." International Journal of Advance and Applied Research S6, no. 18 (April 10, 2025): 282–88. https://doi.org/10.5281/zenodo.15259563.

Full text
Abstract:
In the era of information technology, the Internet serves as a critical medium for accessing information globally. The World Wide Web (WWW) facilitates a diverse range of Internet-based services, including e-commerce, online banking, entertainment, education, and e-governance. However, the exponential growth in web applications has led to a substantial increase in network traffic, causing congestion and elevating server loads. This, in turn, results in higher response times, thereby negatively impacting user experience. Web caching has emerged as an effective solution to mitigate latency issues by storing frequently accessed web objects closer to end users. Traditional caching strategies, such as Least Recently Used (LRU), Least Frequently Used (LFU), SIZE, GD-Size, and GDSF, have been widely implemented to enhance web system performance. However, recent advancements in machine learning have significantly improved conventional web proxy caching policies. Support Vector Machine (SVM), a robust supervised machine learning algorithm, is extensively utilized for both classification and regression tasks. By integrating conventional caching policies with SVM-based predictive models, intelligent caching approaches have been developed. These models are evaluated using trace-driven simulations, and their performance is systematically compared with traditional web proxy caching techniques. The empirical findings indicate that SVM-enhanced caching strategies yield substantial performance improvements, demonstrating the efficacy of machine learning in optimizing web caching systems.
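To make the SVM-enhanced idea concrete, here is a minimal assumed sketch in which an SVM trained on proxy-log-style features (recency, frequency, size) scores how likely an object is to be re-requested, and eviction prefers low scores. The toy data, feature choice, and eviction_priority helper are illustrative, not the paper's model.

```python
# Hypothetical sketch: an SVM classifier scores whether a cached web
# object will be re-requested soon; eviction prefers low scores.
import numpy as np
from sklearn.svm import SVC

# Toy training log: [recency_s, frequency, size_kb] -> re-accessed? (1/0)
X = np.array([[10, 5, 40], [600, 1, 900], [30, 8, 15],
              [1200, 1, 2000], [45, 4, 60], [900, 2, 1500]])
y = np.array([1, 0, 1, 0, 1, 0])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

def eviction_priority(recency_s, frequency, size_kb):
    """Signed distance to the decision boundary; lower -> evicted first."""
    return float(clf.decision_function([[recency_s, frequency, size_kb]])[0])

# Score a stale, rarely used, large object as an eviction candidate.
print(eviction_priority(800, 1, 1600))
```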
APA, Harvard, Vancouver, ISO, and other styles
47

Yin, Jiliang, Congfeng Jiang, Hidetoshi Mino, and Christophe Cérin. "Popularity-Aware In-Network Caching for Edge Named Data Network." Wireless Communications and Mobile Computing 2021 (August 30, 2021): 1–13. http://dx.doi.org/10.1155/2021/3791859.

Full text
Abstract:
The traditional centralized network architecture can lead to a bandwidth bottleneck in the core network. In contrast, in an information-centric network, decentralized in-network caching can shift traffic pressure from the network center to the edge. In this paper, a popularity-aware in-network caching policy, namely Pop, is proposed to achieve optimal caching of network contents in resource-constrained edge networks. Specifically, Pop senses content popularity and distributes content caching without adding extra hardware or traffic overhead. We conduct extensive performance evaluation experiments using ndnSIM. The experiments show that the Pop policy achieves a 54.39% cloud service hit reduction ratio and a 22.76% user request average hop reduction ratio, outperforming other policies including Leave Copy Everywhere, Leave Copy Down, Probabilistic Caching, and Random choice caching. In addition, we propose an ideal caching policy (Ideal) as a baseline, in which content popularity is known in advance; the gap between Pop and Ideal is 4.36% in cloud service hit reduction ratio and only 1.47% in user request average hop reduction ratio. Further simulation results confirm the accuracy of Pop in perceiving content popularity and its robustness across different request scenarios.
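A router-level policy in the spirit of Pop can be sketched as follows: each node counts the Interests it forwards and admits a Data packet into its content store only if the content is more popular than the least popular item currently cached. This is an assumed simplification for illustration, not the paper's exact policy.

```python
# Hypothetical popularity-aware cache at a single NDN router.
from collections import Counter, OrderedDict

class PopCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.popularity = Counter()   # request counts seen by this router
        self.store = OrderedDict()    # content name -> data

    def on_interest(self, name):
        """Record the request and return cached data on a hit."""
        self.popularity[name] += 1
        return self.store.get(name)

    def on_data(self, name, data):
        """Admit content only if it beats the least popular cached item."""
        if name in self.store or self.capacity == 0:
            return
        if len(self.store) < self.capacity:
            self.store[name] = data
            return
        victim = min(self.store, key=lambda n: self.popularity[n])
        if self.popularity[name] > self.popularity[victim]:
            del self.store[victim]
            self.store[name] = data
```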
APA, Harvard, Vancouver, ISO, and other styles
48

Jia, Qingmin, RenChao Xie, Tao Huang, Jiang Liu, and Yunjie Liu. "Caching Resource Sharing for Network Slicing in 5G Core Network." Journal of Organizational and End User Computing 31, no. 4 (October 2019): 1–18. http://dx.doi.org/10.4018/joeuc.2019100101.

Full text
Abstract:
Network slicing has been considered a promising technology in next-generation mobile networks (5G): it can create virtual networks and provide customized services on demand. Most existing works on network slicing mainly focus on virtualization technology and have not considered in-network caching thoroughly. However, in-network caching, as one of the key technologies of information-centric networking (ICN), has been considered a significant approach in 5G networks to cope with traffic explosion and other network challenges. In this article, the authors jointly consider in-network caching and network slicing. They propose an efficient caching resource sharing scheme for network slicing in the 5G core network, aiming to solve the problem of how to efficiently share the limited physical caching resources of an Infrastructure Provider (InP) among multiple network slices. In addition, from the perspective of network slicing, the authors formulate the caching resource sharing problem as a non-cooperative game and propose an iterative algorithm based on caching resource updating to obtain the Nash equilibrium solution. Simulation results show that the proposed algorithm converges well and illustrate the effectiveness of the proposed scheme.
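The iterative path to a Nash equilibrium can be illustrated with a simple best-response loop. The utility form (logarithmic gain minus a unit price set by the InP) and the price-adjustment rule below are assumptions made for this sketch, not the authors' exact formulation.

```python
# Hypothetical best-response sketch for sharing an InP's cache among
# slices. Each slice i maximizes w_i * log(1 + x_i) - p * x_i, giving
# the best response x_i = max(0, w_i / p - 1); the InP adjusts the
# unit price p until total demand meets the cache capacity C.
def share_cache(weights, capacity, p=1.0, lr=0.05, iters=2000):
    for _ in range(iters):
        demand = [max(0.0, w / p - 1.0) for w in weights]
        excess = sum(demand) - capacity
        p = max(1e-6, p + lr * excess / capacity)  # raise price if over-demanded
    return demand, p

alloc, price = share_cache(weights=[4.0, 2.0, 1.0], capacity=10.0)
print([round(a, 2) for a in alloc], round(price, 3))
```

At the fixed point no slice can improve its utility unilaterally at the prevailing price, which is the Nash-equilibrium intuition behind such iterative updating.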
APA, Harvard, Vancouver, ISO, and other styles
49

Naeem, Muhammad Ali, Yahui Meng, and Sushank Chaudhary. "The Impact of Federated Learning on Improving the IoT-Based Network in a Sustainable Smart Cities." Electronics 13, no. 18 (September 13, 2024): 3653. http://dx.doi.org/10.3390/electronics13183653.

Full text
Abstract:
The caching mechanism of federated learning in smart cities is vital for improving data handling and communication in IoT environments. Because it facilitates learning among separately connected devices, federated learning makes it possible to quickly update caching strategies in response to data usage without invading users' privacy. Federated-learning-based caching improves the dynamism, effectiveness, and data reachability that smart city services need to function properly. In this paper, a new caching strategy for Named Data Networking (NDN) based on federated learning in smart city IoT contexts is proposed and described. The proposed strategy applies a federated learning technique to cache content more effectively based on its popularity, thereby improving network performance. The strategy was compared to benchmark schemes in terms of cache hit ratio, content retrieval delay, and energy utilization. The results show that the proposed caching strategy performs far better than its counterparts in cache hit rate, content fetch time, and energy consumption. These enhancements result in smarter and more efficient smart city networks, a clear indication of how federated learning can revolutionize content caching in NDN-based IoT.
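A FedAvg-style round for popularity estimation might look like the sketch below: each edge node nudges its local popularity scores toward its observed request shares, and a coordinator averages the local models without ever collecting raw request logs. All names and update rules are assumptions for illustration, not the paper's strategy.

```python
# Hypothetical federated popularity estimation for NDN edge caching.
import numpy as np

def local_update(global_scores, local_requests, lr=0.5):
    """Move local popularity estimates toward observed request shares."""
    freq = local_requests / local_requests.sum()
    return global_scores + lr * (freq - global_scores)

def federated_round(global_scores, per_node_requests):
    """FedAvg with equal node weights: average the local models."""
    locals_ = [local_update(global_scores, r) for r in per_node_requests]
    return np.mean(locals_, axis=0)

scores = np.full(4, 0.25)   # 4 contents, uniform prior
node_requests = [np.array([50, 30, 15, 5]), np.array([40, 40, 10, 10])]
for _ in range(10):
    scores = federated_round(scores, node_requests)
print(scores.round(3))      # the top-k scored contents would be cached
```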
APA, Harvard, Vancouver, ISO, and other styles
50

Dinh, Ngoc-Thanh, and Young-Han Kim. "An Efficient Correlation-Based Cache Retrieval Scheme at the Edge for Internet of Things." Sensors 20, no. 23 (November 30, 2020): 6846. http://dx.doi.org/10.3390/s20236846.

Full text
Abstract:
Existing caching mechanisms consider content objects individually, without considering the semantic correlation among them. We argue that this approach can be inefficient in the Internet of Things due to the highly redundant nature of IoT device deployments and the data accuracy tolerance of IoT applications. In many IoT applications, an approximate answer is acceptable; therefore, a cached information object that is highly semantically correlated with the requested one can be used instead of a cache of the exact requested object. In such cases, caching both information objects is inefficient and redundant. This paper proposes a cache retrieval scheme that considers the semantic correlation among the information objects of nodes. We illustrate the benefits of considering semantic correlation by studying IoT data caching at the edge. Our experiments and analysis show that semantically correlated caching can significantly improve efficiency and cache hit ratio while reducing the resource consumption of IoT devices.
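The retrieval idea can be sketched with a similarity threshold: if some cached object's semantic descriptor is close enough to the request, the cache answers approximately instead of fetching the exact object from the device. The cosine measure, the threshold, and the descriptor vectors below are assumptions for the sketch.

```python
# Hypothetical semantic-correlation cache retrieval for IoT readings.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class SemanticCache:
    def __init__(self, threshold=0.9):
        self.items = []            # (descriptor_vector, cached_value)
        self.threshold = threshold

    def get(self, query_vec):
        """Return a correlated cached value, or None to fetch remotely."""
        best = max(self.items, key=lambda it: cosine(it[0], query_vec),
                   default=None)
        if best and cosine(best[0], query_vec) >= self.threshold:
            return best[1]         # correlated hit: approximate answer
        return None                # miss: query the IoT device itself

cache = SemanticCache()
cache.items.append((np.array([1.0, 0.1, 0.0]), {"temp_c": 21.4}))
print(cache.get(np.array([0.95, 0.12, 0.02])))  # nearby sensor -> hit
```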
APA, Harvard, Vancouver, ISO, and other styles