
Journal articles on the topic 'Cache replacement algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Cache replacement algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lyons, Steven, and Raju Rangaswami. "To Cache or Not to Cache." Algorithms 17, no. 7 (2024): 301. http://dx.doi.org/10.3390/a17070301.

Full text
Abstract:
Unlike conventional CPU caches, non-datapath caches, such as host-side flash caches which are extensively used as storage caches, have distinct requirements. While every cache miss results in a cache update in a conventional cache, non-datapath caches allow for the flexibility of selective caching, i.e., the option of not having to update the cache on each miss. We propose a new, generalized, bimodal caching algorithm, Fear Of Missing Out (FOMO), for managing non-datapath caches. Being generalized has the benefit of allowing any datapath cache replacement policy, such as LRU, ARC, or LIRS, to
APA, Harvard, Vancouver, ISO, and other styles
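The selective-caching idea described in the abstract above can be sketched generically: on a miss, a non-datapath cache may decline to insert the fetched item. The Python sketch below is not the FOMO algorithm itself; the two-miss admission rule, the history size, and all names are invented for illustration, wrapped around a plain LRU cache.

```python
from collections import OrderedDict

class SelectiveLRU:
    """Toy non-datapath cache: an LRU cache plus an admission filter.

    Unlike a conventional cache, a miss does not force an insertion;
    an item is admitted only after it has missed at least twice within
    a (hypothetical) recent-history window.
    """

    def __init__(self, capacity, history_size=1024):
        self.capacity = capacity
        self.cache = OrderedDict()          # key -> value, in LRU order
        self.recent_misses = OrderedDict()  # keys seen recently but not cached
        self.history_size = history_size

    def get(self, key, fetch):
        if key in self.cache:               # hit: refresh recency
            self.cache.move_to_end(key)
            return self.cache[key]
        value = fetch(key)                  # miss: fetch from the backing store
        if key in self.recent_misses:       # second recent miss -> admit
            del self.recent_misses[key]
            self._insert(key, value)
        else:                               # first miss -> remember, do not cache
            self.recent_misses[key] = True
            if len(self.recent_misses) > self.history_size:
                self.recent_misses.popitem(last=False)
        return value

    def _insert(self, key, value):
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used item
        self.cache[key] = value


# Example: the backing "store" is just a dict lookup.
store = {i: f"block-{i}" for i in range(10)}
cache = SelectiveLRU(capacity=3)
for k in [1, 2, 1, 3, 1, 4, 4]:
    cache.get(k, store.__getitem__)
print(list(cache.cache))   # [1, 4]: only keys that missed repeatedly were admitted
```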
2

WANG, JAMES Z., and VIPUL BHULAWALA. "DESIGN AND IMPLEMENTATION OF A P2P COOPERATIVE PROXY CACHE SYSTEM." Journal of Interconnection Networks 08, no. 02 (2007): 147–62. http://dx.doi.org/10.1142/s0219265907001953.

Full text
Abstract:
In this paper, we design and implement a P2P cooperative proxy caching system based on a novel P2P cooperative proxy caching scheme. To effectively locate the cached web documents, a TTL-based routing protocol is proposed to manage the query and response messages in the P2P cooperative proxy cache system. Furthermore, we design a predict query-route algorithm to improve the TTL-based routing protocol by adding extra information in the query message packets. To select a suitable cache replacement algorithm for the P2P cooperative proxy cache system, three different cache replacement algorithms,
APA, Harvard, Vancouver, ISO, and other styles
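The TTL-based routing idea in the abstract above can be illustrated with a very small sketch: a query carries a hop-count TTL, each peer answers from its local cache if it can, and otherwise the TTL is decremented and the query is forwarded to neighbours. This is a generic, much-simplified flooding illustration under assumed data structures, not the paper's protocol or its predict query-route extension.

```python
def ttl_query(peers, start, doc_id, ttl):
    """Breadth-first, TTL-limited search for a cached document.

    peers: dict peer_id -> {"cache": set of doc ids, "neighbours": list of peer ids}
    Returns the id of a peer holding doc_id, or None if the TTL expires first.
    """
    frontier = [start]
    visited = {start}
    while frontier and ttl >= 0:
        next_frontier = []
        for p in frontier:
            if doc_id in peers[p]["cache"]:
                return p                       # cache hit at this peer
            for n in peers[p]["neighbours"]:
                if n not in visited:
                    visited.add(n)
                    next_frontier.append(n)
        frontier = next_frontier
        ttl -= 1                               # one hop consumed
    return None


peers = {
    "A": {"cache": set(),      "neighbours": ["B", "C"]},
    "B": {"cache": {"doc42"},  "neighbours": ["A"]},
    "C": {"cache": set(),      "neighbours": ["A"]},
}
print(ttl_query(peers, "A", "doc42", ttl=2))   # -> "B"
```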
3

Vishnekov, A. V., and E. M. Ivanova. "DYNAMIC CONTROL METHODS OF CACHE LINES REPLACEMENT POLICY." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 191 (May 2020): 49–56. http://dx.doi.org/10.14489/vkit.2020.05.pp.049-056.

Full text
Abstract:
The paper investigates how to increase the performance of computing systems by improving the efficiency of cache memory and analyzes the efficiency indicators of replacement algorithms. We show the need for automated or automatic means of cache memory tuning under the current conditions of program code execution, namely dynamic control of cache replacement algorithms by replacing the current replacement algorithm with a more effective one for the current computation conditions. Methods are developed for caching policy control based on the program type definition: cyclic, sequentia
APA, Harvard, Vancouver, ISO, and other styles
5

Prihozhy, A. A. "Simulation of direct mapped, k-way and fully associative cache on all pairs shortest paths algorithms." «System analysis and applied information science», no. 4 (December 30, 2019): 10–18. http://dx.doi.org/10.21122/2309-4923-2019-4-10-18.

Full text
Abstract:
A cache is an intermediate level between the fast CPU and the slow main memory. It stores copies of frequently used data to reduce the access time to main memory. Caches are capable of exploiting temporal and spatial locality during program execution. When the processor accesses memory, the cache behavior depends on whether the data is in the cache: a cache hit occurs if it is, and a cache miss occurs otherwise. In the latter case, the cache may have to evict other data. Misses produce processor stalls and slow down the computation. The replacement policy chooses the data to evict, trying to p
APA, Harvard, Vancouver, ISO, and other styles
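A minimal trace-driven model of the direct-mapped case discussed in the abstract above might look like the following; the line count, line size, and address trace are made-up parameters, and a real simulator such as the paper's would also model k-way and fully associative organisations with a replacement policy.

```python
def direct_mapped_hits(addresses, num_lines=4, line_size=16):
    """Count hits/misses for a direct-mapped cache on a sequence of byte addresses."""
    tags = [None] * num_lines              # one tag per cache line
    hits = misses = 0
    for addr in addresses:
        block = addr // line_size          # which memory block the address falls in
        index = block % num_lines          # which cache line the block maps to
        tag = block // num_lines
        if tags[index] == tag:
            hits += 1
        else:                              # miss: the resident block is evicted
            misses += 1
            tags[index] = tag
    return hits, misses


# A toy trace with some spatial and temporal locality.
trace = [0, 4, 8, 64, 0, 4, 128, 0]
print(direct_mapped_hits(trace))           # -> (3, 5): conflict misses at index 0 dominate
```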
6

Begum, B. Shameedha, and N. Ramasubramanian. "Design of an Intelligent Data Cache with Replacement Policy." International Journal of Embedded and Real-Time Communication Systems 10, no. 2 (2019): 87–107. http://dx.doi.org/10.4018/ijertcs.2019040106.

Full text
Abstract:
Embedded systems are designed for a variety of applications ranging from Hard Real Time applications to mobile computing, which demands various types of cache designs for better performance. Since real-time applications place stringent requirements on performance, the role of the cache subsystem assumes significance. Reconfigurable caches meet performance requirements under this context. Existing reconfigurable caches tend to use associativity and size for maximizing cache performance. This article proposes a novel approach of a reconfigurable and intelligent data cache (L1) based on replaceme
APA, Harvard, Vancouver, ISO, and other styles
7

Zulfa, Mulki Indana, Ari Fadli, Adhistya Erna Permanasari, and Waleed Ali Ahmed. "Performance comparison of cache replacement algorithms on various internet traffic." JURNAL INFOTEL 15, no. 1 (2023): 1–7. http://dx.doi.org/10.20895/infotel.v15i1.872.

Full text
Abstract:
Internet users tend to skip and look for alternative websites if they have slow response times. For cloud network managers, implementing a caching strategy on the edge network can help lighten the workload of databases and application servers. The caching strategy is carried out by storing frequently accessed data objects in cache memory. Through this strategy, the speed of access to the same data becomes faster. Cache replacement is the main mechanism of the caching strategy. There are seven cache replacement algorithms with good performance that can be used, namely LRU, LFU, LFUDA, GDS, GDSF
APA, Harvard, Vancouver, ISO, and other styles
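For readers unfamiliar with the classic policies compared above, here is a tiny trace-driven comparison of LRU and LFU, only two of the seven algorithms the paper evaluates; the trace and cache size are invented for illustration.

```python
from collections import OrderedDict, defaultdict

def lru_hit_rate(trace, capacity):
    cache, hits = OrderedDict(), 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)          # refresh recency
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict the least recently used object
            cache[key] = True
    return hits / len(trace)

def lfu_hit_rate(trace, capacity):
    cache, freq, hits = set(), defaultdict(int), 0
    for key in trace:
        freq[key] += 1
        if key in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                victim = min(cache, key=freq.__getitem__)  # evict the least frequently used
                cache.remove(victim)
            cache.add(key)
    return hits / len(trace)

# Skewed "web" trace: object 'a' is popular, the rest are mostly one-off requests.
trace = ["a", "a", "a", "b", "c", "a", "d", "e", "a"]
print(f"LRU: {lru_hit_rate(trace, 2):.2f}, LFU: {lfu_hit_rate(trace, 2):.2f}")
# LRU: 0.22, LFU: 0.44 -- LFU keeps the popular object 'a' resident on this trace.
```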
8

Liu, Tian, Wei Zhang, Tao Xu, and Guan Wang. "Research and Analysis of Design and Optimization of Magnetic Memory Material Cache Based on STT-MRAM." Key Engineering Materials 815 (August 2019): 28–34. http://dx.doi.org/10.4028/www.scientific.net/kem.815.28.

Full text
Abstract:
This paper proposes a cache replacement algorithm based on STT-MRAM magnetic memory, which aims to make better use of material systems based on STT-MRAM magnetic memory. The algorithm replaces data blocks in the cache by considering the position of the STT-MRAM magnetic memory head and the hardware characteristics of the STT-MRAM magnetic memory. This method differs from traditional common cache replacement algorithms for magnetic memory. Traditional replacement algorithms are generally designed to improve the cache through the algorithm alone, and the hardware characteristics
APA, Harvard, Vancouver, ISO, and other styles
9

Pratheeksha, P., and SA Revathi. "Machine Learning-Based Cache Replacement Policies: A Survey." International Journal of Engineering and Advanced Technology (IJEAT) 10, no. 6 (2021): 19–22. https://doi.org/10.35940/ijeat.F2907.0810621.

Full text
Abstract:
Despite extensive developments in improving cache hit rates, designing an optimal cache replacement policy that mimics Belady’s algorithm still remains a challenging task. Existing standard static replacement policies do not adapt to the dynamic nature of memory access patterns, and the diversity of computer programs only exacerbates the problem. Several factors affect the design of a replacement policy such as hardware upgrades, memory overheads, memory access patterns, model latency, etc. The amalgamation of a fundamental concept like cache replacement with advanced machine learning
APA, Harvard, Vancouver, ISO, and other styles
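Belady's algorithm, which the survey above uses as the idealised reference point, evicts the block whose next use lies farthest in the future; it needs the full future trace, so it is an offline upper bound rather than a deployable policy. Below is a minimal sketch of this textbook MIN policy (not of any of the machine-learning policies surveyed), assuming a simple list-of-keys trace.

```python
def belady_misses(trace, capacity):
    """Offline optimal (Belady/MIN) replacement: count misses on a fully known trace."""
    cache, misses = set(), 0
    for i, key in enumerate(trace):
        if key in cache:
            continue
        misses += 1
        if len(cache) < capacity:
            cache.add(key)
            continue

        # Evict the resident key whose next reference is farthest away (or never occurs).
        def next_use(k):
            try:
                return trace.index(k, i + 1)
            except ValueError:
                return float("inf")

        victim = max(cache, key=next_use)
        cache.remove(victim)
        cache.add(key)
    return misses


trace = ["a", "b", "c", "a", "b", "d", "a", "b", "c", "d"]
print(belady_misses(trace, capacity=3))   # 5 misses: the offline minimum for this trace
```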
10

Yeung, Kai-Hau, and Kin-Yeung Wong. "An Unifying Replacement Approach for Caching Systems." Journal of Communications Software and Systems 3, no. 4 (2007): 256. http://dx.doi.org/10.24138/jcomss.v3i4.247.

Full text
Abstract:
A cache replacement algorithm called probability based replacement (PBR) is proposed in this paper. The algorithm makes replacement decisions based on the byte access probabilities of documents. This concept can be applied to both small conventional web documents and large video documents. The performance of the PBR algorithm is studied by both analysis and simulation. By comparing cache hit probability, hit rate and average time spent in three systems, it is shown that the proposed algorithm outperforms the commonly used LRU and LFU algorithms. Simulation results show that, when large video documen
APA, Harvard, Vancouver, ISO, and other styles
11

Pratheeksha, P., and S. A. Revathi. "Machine Learning-Based Cache Replacement Policies: A Survey." International Journal of Engineering and Advanced Technology 10, no. 6 (2021): 19–22. http://dx.doi.org/10.35940/ijeat.f2907.0810621.

Full text
Abstract:
Despite extensive developments in improving cache hit rates, designing an optimal cache replacement policy that mimics Belady’s algorithm still remains a challenging task. Existing standard static replacement policies do not adapt to the dynamic nature of memory access patterns, and the diversity of computer programs only exacerbates the problem. Several factors affect the design of a replacement policy such as hardware upgrades, memory overheads, memory access patterns, model latency, etc. The amalgamation of a fundamental concept like cache replacement with advanced machine learning algori
APA, Harvard, Vancouver, ISO, and other styles
12

Wang, Yizhou, Yishuo Meng, Jiaxing Wang, and Chen Yang. "LSTM-CRP: Algorithm-Hardware Co-Design and Implementation of Cache Replacement Policy Using Long Short-Term Memory." Big Data and Cognitive Computing 8, no. 10 (2024): 140. http://dx.doi.org/10.3390/bdcc8100140.

Full text
Abstract:
As deep learning has produced dramatic breakthroughs in many areas, it has motivated emerging studies on the combination between neural networks and cache replacement algorithms. However, deep learning is a poor fit for performing cache replacement in hardware implementation because its neural network models are impractically large and slow. Many studies have tried to use the guidance of the Belady algorithm to speed up the prediction of cache replacement. But it is still impractical to accurately predict the characteristics of future access addresses, introducing inaccuracy in the discriminat
APA, Harvard, Vancouver, ISO, and other styles
13

Titarenko, Larysa, Vyacheslav Kharchenko, Vadym Puidenko, Artem Perepelitsyn, and Alexander Barkalov. "Hardware-Based Implementation of Algorithms for Data Replacement in Cache Memory of Processor Cores." Computers 13, no. 7 (2024): 166. http://dx.doi.org/10.3390/computers13070166.

Full text
Abstract:
Replacement policies have an important role in the functioning of the cache memory of processor cores. The implementation of a successful policy allows us to increase the performance of the processor core and the computer system as a whole. Replacement policies are most often evaluated by the percentage of cache hits during the cycles of the processor bus when accessing the cache memory. The policies that focus on replacing the Least Recently Used (LRU) or Least Frequently Used (LFU) elements, whether instructions or data, are relevant for use. It should be noted that in the paging cache buffe
APA, Harvard, Vancouver, ISO, and other styles
14

Jeong, J., and M. Dubois. "Cache replacement algorithms with nonuniform miss costs." IEEE Transactions on Computers 55, no. 4 (2006): 353–65. http://dx.doi.org/10.1109/tc.2006.50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kharbutli, M., and Yan Solihin. "Counter-Based Cache Replacement and Bypassing Algorithms." IEEE Transactions on Computers 57, no. 4 (2008): 433–47. http://dx.doi.org/10.1109/tc.2007.70816.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Akbari-Bengar, Davood, Ali Ebrahimnejad, Homayun Motameni, and Mehdi Golsorkhtabaramiri. "Improving of cache memory performance based on a fuzzy clustering based page replacement algorithm by using four features." Journal of Intelligent & Fuzzy Systems 39, no. 5 (2020): 7899–908. http://dx.doi.org/10.3233/jifs-201360.

Full text
Abstract:
The Internet is one of the most influential new communication technologies and has influenced all aspects of human life. Extensive use of the Internet and the rapid growth of network services have increased network traffic and ultimately slowed internet speeds around the world. Such traffic reduces network bandwidth, increases server response latency, and increases access time to web documents. Cache memory is used to improve CPU performance and reduce response time. Due to the cost and limited size of cache compared to other devices that store information, an alternative policy is used to select
APA, Harvard, Vancouver, ISO, and other styles
17

Haraty, Ramzi A. "Innovative Mobile E-Healthcare Systems: A New Rule-Based Cache Replacement Strategy Using Least Profit Values." Mobile Information Systems 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/6141828.

Full text
Abstract:
Providing and managing e-health data from heterogeneous and ubiquitous e-health service providers in a content distribution network (CDN) for providing e-health services is a challenging task. A content distribution network is normally utilized to cache e-health media contents such as real-time medical images and videos. Efficient management, storage, and caching of distributed e-health data in a CDN or in a cloud computing environment of mobile patients makes it possible for doctors, health care professionals, and other e-health service providers to have immediate access to e-health information for ef
APA, Harvard, Vancouver, ISO, and other styles
18

Al-Ahmadi, Saad. "A New Efficient Cache Replacement Strategy for Named Data Networking." International journal of Computer Networks & Communications 13, no. 5 (2021): 19–35. http://dx.doi.org/10.5121/ijcnc.2021.13502.

Full text
Abstract:
The Information-Centric Network (ICN) is a future internet architecture with efficient content retrieval and distribution. Named Data Networking (NDN) is one of the proposed architectures for ICN. NDN’s in-network caching improves data availability, reduces retrieval delays and network load, alleviates producer load, and limits data traffic. Despite the existence of several caching decision algorithms, the fetching and distribution of contents with minimum resource utilization remains a great challenge. In this paper, we introduce a new cache replacement strategy called Enhanced Time and Frequency Ca
APA, Harvard, Vancouver, ISO, and other styles
19

Fang, Juan, Han Kong, Huijing Yang, Yixiang Xu, and Min Cai. "A Heterogeneity-Aware Replacement Policy for the Partitioned Cache on Asymmetric Multi-Core Architectures." Micromachines 13, no. 11 (2022): 2014. http://dx.doi.org/10.3390/mi13112014.

Full text
Abstract:
In an asymmetric multi-core architecture, multiple heterogeneous cores share the last-level cache (LLC). Due to the different memory access requirements among heterogeneous cores, the LLC competition is more intense. In the current work, we propose a heterogeneity-aware replacement policy for the partitioned cache (HAPC), which reduces the mutual interference between cores through cache partitioning, and tracks the shared reuse state of each cache block within the partition at runtime to guide the replacement policy to keep cache blocks shared by multiple cores in multithreaded programs. In th
APA, Harvard, Vancouver, ISO, and other styles
20

Dessokey, Maha. "Simulation-Based Evaluation of Big Data Caching Mechanisms." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (2024): 1267–72. http://dx.doi.org/10.22214/ijraset.2024.65347.

Full text
Abstract:
This paper provides a simulation-based evaluation that addresses memory management problems throughout Big Data processing. A significant problem occurs with in-memory computing when there is not enough available memory for processing the whole chunk of data, and hence some data must be selected for deletion to make room for new ones. The selected research strategy is to use different cache selection and replacement algorithms, such as Adaptive Replacement Cache (ARC) and Low Inter-Reference Recency Set (LIRS) algorithms, besides the default one, which is Least Recently Used (LRU). A simulator
APA, Harvard, Vancouver, ISO, and other styles
21

Nguyen, Xuan Truong, and Khanh Lam Ho. "Development of Web Caching Replacement in Internet Service Based on GDSF." International Journal of Recent Technology and Engineering (IJRTE) 10, no. 6 (2022): 83–87. https://doi.org/10.35940/ijrte.F6851.0310622.

Full text
Abstract:
This paper presents a policy to replace cached web content, named GDSF-EXT, based on GDSF, by adding an extensible cache located in the network device. In processing, these web contents will be retrieved instead of having to be searched for in other devices on the same network layer or at a higher network layer. That helps to reduce the response time of the user's request. The proposed algorithm is compared to the original GDSF to evaluate the performance of the network
APA, Harvard, Vancouver, ISO, and other styles
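For context on the GDSF family that the paper above extends, the standard GDSF key assigns each object i the value K(i) = L + F(i) × C(i) / S(i), where L is a global aging term, F(i) the access frequency, C(i) the fetch cost, and S(i) the size; the object with the smallest key is evicted and L is raised to that key. The sketch below is plain textbook GDSF with cost fixed to 1, not the GDSF-EXT variant proposed in the paper.

```python
class GDSFCache:
    """Textbook Greedy-Dual-Size-Frequency replacement (fetch cost C(i) assumed to be 1)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.L = 0.0                      # global aging value
        self.meta = {}                    # key -> (size, freq, key_value)

    def access(self, key, size):
        if key in self.meta:              # hit: bump frequency, recompute the key value
            sz, freq, _ = self.meta[key]
            freq += 1
            self.meta[key] = (sz, freq, self.L + freq / sz)
            return True
        # Miss: make room by evicting smallest-key objects and inflating L.
        while self.used + size > self.capacity and self.meta:
            victim = min(self.meta, key=lambda k: self.meta[k][2])
            self.L = self.meta[victim][2]
            self.used -= self.meta[victim][0]
            del self.meta[victim]
        self.meta[key] = (size, 1, self.L + 1 / size)
        self.used += size
        return False


cache = GDSFCache(capacity_bytes=100)
for obj, size in [("small", 10), ("big", 80), ("small", 10), ("other", 30)]:
    print(obj, "hit" if cache.access(obj, size) else "miss")
# The large, infrequently used object is evicted first when space runs out.
```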
22

Zhang, Jian Wei, Bao Wei Zhang, Si Liu, and Zhao Yang Li. "An Identifier-to-Locator Mapping Buffer Management Algorithm Based on Aimed Pushing and Pre-Fetching Method." Advanced Materials Research 457-458 (January 2012): 1317–25. http://dx.doi.org/10.4028/www.scientific.net/amr.457-458.1317.

Full text
Abstract:
This paper analyzes the features of a new network architecture in which the locator and the identifier are separated. To improve the query and replacement efficiency of the identifier mapping, a backup of the locator/identifier mapping information needs to be stored in the most necessary place, which is in fact a cache management problem. The paper first analyzes typical cache management algorithms, and then proposes a new aimed pushing and pre-fetching strategy according to the bidirectional interactive character of the communication activity in querying the mapping relationshi
APA, Harvard, Vancouver, ISO, and other styles
23

Raigoza, Jaime, and Junping Sun. "Temporal Join with Hilbert Curve Mapping and Adaptive Buffer Management." International Journal of Software Innovation 2, no. 2 (2014): 1–19. http://dx.doi.org/10.4018/ijsi.2014040101.

Full text
Abstract:
Management of data with a time dimension increases the overhead of storage and query processing in large database applications especially with the join operation, which is a commonly used and expensive relational operator. The temporal join evaluation can be time consuming because temporal data are intrinsically multi-dimensional. Also, due to a limited buffer size, the long-lived data can be frequently swapped-in and swapped-out between disk and main memory thus resulting in a low cache hit ratio. The proposed index-based Hilbert-Temporal Join (Hilbert-TJ) join algorithm maps temporal data in
APA, Harvard, Vancouver, ISO, and other styles
24

Kwon, Mingoo, and Minseok Song. "A Deep Reinforcement Learning-Based Technique for Enhancing Cache Hit Rate by Adapting to Dynamic File Request Patterns." Korean Institute of Smart Media 14, no. 1 (2025): 26–34. https://doi.org/10.30693/smj.2025.14.1.26.

Full text
Abstract:
Improving the cache hit ratio in edge caching is crucial for effectively handling file requests within limited cache capacity and optimizing network and system resources. This study proposes a file cache management approach based on Deep Reinforcement Learning (DRL). The proposed method aims to enhance data access performance by efficiently utilizing limited cache resources, adapting to dynamic file request patterns, and improving the cache hit ratio. Specifically, it employs the Proximal Policy Optimization (PPO) algorithm to intelligently manage cache replacement policies and effectively han
APA, Harvard, Vancouver, ISO, and other styles
25

Sahyudi, M., and Amarudin. "Implementation of Cache Memory Technology in Improving the Performance of Modern Computing Systems." Jurnal Penelitian Pendidikan IPA 11, no. 6 (2025): 10–17. https://doi.org/10.29303/jppipa.v11i6.11545.

Full text
Abstract:
The gap between increasing processor speed and main memory access speed (the memory wall) is a significant obstacle in the optimization of modern computing systems, where today's applications require processing large volumes of data with real-time responses. This study aims to analyze the effectiveness of implementing cache memory technology to improve the performance of modern computing systems, focusing on: 1) identification of key parameters that affect cache effectiveness on various workloads, 2) evaluation of adaptive cache replacement algorithms, 3) analysis of performance trade-offs with ener
APA, Harvard, Vancouver, ISO, and other styles
26

Cho, Minseon, and Donghyun Kang. "ML-CLOCK: Efficient Page Cache Algorithm Based on Perceptron-Based Neural Network." Electronics 10, no. 20 (2021): 2503. http://dx.doi.org/10.3390/electronics10202503.

Full text
Abstract:
Today, research trends clearly confirm the fact that machine learning technologies open up new opportunities in various computing environments, such as Internet of Things, mobile, and enterprise. Unfortunately, the prior efforts rarely focused on designing system-level input/output stacks (e.g., page cache, file system, block input/output, and storage devices). In this paper, we propose a new page replacement algorithm, called ML-CLOCK, that embeds single-layer perceptron neural network algorithms to enable an intelligent eviction policy. In addition, ML-CLOCK employs preference rules that con
APA, Harvard, Vancouver, ISO, and other styles
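ML-CLOCK builds on the classic CLOCK (second-chance) page replacement algorithm; for readers who have not seen it, a plain CLOCK sketch follows. The perceptron-based eviction rules of the paper are not reproduced here, and the page trace is invented.

```python
class ClockCache:
    """Classic CLOCK (second-chance) page replacement."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []          # list of [page, reference_bit]
        self.index = {}           # page -> position in frames
        self.hand = 0

    def access(self, page):
        if page in self.index:                    # hit: set the reference bit
            self.frames[self.index[page]][1] = 1
            return True
        if len(self.frames) < self.capacity:      # a free frame is still available
            self.index[page] = len(self.frames)
            self.frames.append([page, 1])
            return False
        while True:                               # sweep the clock hand
            victim, ref = self.frames[self.hand]
            if ref:                               # referenced: give a second chance
                self.frames[self.hand][1] = 0
                self.hand = (self.hand + 1) % self.capacity
            else:                                 # not referenced: evict and reuse the frame
                del self.index[victim]
                self.frames[self.hand] = [page, 1]
                self.index[page] = self.hand
                self.hand = (self.hand + 1) % self.capacity
                return False


cache = ClockCache(capacity=3)
hits = sum(cache.access(p) for p in [1, 2, 3, 1, 4, 1, 5, 2])
print(hits)   # 1 hit on this toy page trace
```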
28

Shang, Jing, Zhihui Wu, Zhiwen Xiao, Yifei Zhang, and Jibin Wang. "BERT4Cache: a bidirectional encoder representations for data prefetching in cache." PeerJ Computer Science 10 (August 29, 2024): e2258. http://dx.doi.org/10.7717/peerj-cs.2258.

Full text
Abstract:
Cache plays a crucial role in improving system response time, alleviating server pressure, and achieving load balancing in various aspects of modern information systems. The data prefetch and cache replacement algorithms are significant factors influencing caching performance. Due to the inability to learn user interests and preferences accurately, existing rule-based and data mining caching algorithms fail to capture the unique features of the user access behavior sequence, resulting in low cache hit rates. In this article, we introduce BERT4Cache, an end-to-end bidirectional Transformer mode
APA, Harvard, Vancouver, ISO, and other styles
29

Nalbant, Kemal Gökhan, Sultan Almutairi, Asma Hassan Alshehri, Hayle Kemal, Suliman A. Alsuhibany, and Bong Jun Choi. "An efficient algorithm for data transmission certainty in IIoT sensing network: A priority-based approach." PLOS ONE 19, no. 7 (2024): e0305092. http://dx.doi.org/10.1371/journal.pone.0305092.

Full text
Abstract:
This paper proposes a novel cache replacement technique based on the notion of combining periodic popularity prediction with size caching. The popularity, size, and time updates characteristics are used to calculate the value of each cache item. When it comes to content replacement, the information with the least value is first eliminated. Simulation results show that the proposed method outperforms the current algorithms in terms of cache hit rate and delay. The hit rate of the proposed scheme is 15.3% higher than GDS, 17.3% higher than MPC, 20.1% higher than LRU, 22.3% higher than FIFO, and
APA, Harvard, Vancouver, ISO, and other styles
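The least-value replacement idea sketched in the abstract above, scoring each cached item from its popularity, size, and last update time and evicting the lowest score, can be illustrated generically as follows. The weighting function and field names are invented placeholders for illustration, not the paper's actual value formula.

```python
import time

def evict_least_valuable(cache, now=None):
    """cache: dict key -> {"hits": int, "size": int, "last_access": float (epoch seconds)}.
    Removes and returns the key with the lowest (hypothetical) value score."""
    now = now or time.time()

    def value(meta):
        recency = 1.0 / (1.0 + now - meta["last_access"])    # newer -> closer to 1
        return meta["hits"] * recency / meta["size"]          # popular, fresh, small wins

    victim = min(cache, key=lambda k: value(cache[k]))
    cache.pop(victim)
    return victim


now = time.time()
cache = {
    "video_chunk": {"hits": 3, "size": 500, "last_access": now - 60},
    "thumbnail":   {"hits": 9, "size": 10,  "last_access": now - 5},
    "stale_page":  {"hits": 1, "size": 50,  "last_access": now - 600},
}
print(evict_least_valuable(cache, now))   # -> "stale_page": rarely hit and old
```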
30

Osman, Areej M., and Niemah I. Osman. "A Comparison of Cache Replacement Algorithms for Video Services." International Journal of Computer Science and Information Technology 10, no. 2 (2018): 95–111. http://dx.doi.org/10.5121/ijcsit.2018.10208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Han, Luchao, Zhichuan Guo, and Xuewen Zeng. "Research on Multicore Key-Value Storage System for Domain Name Storage." Applied Sciences 11, no. 16 (2021): 7425. http://dx.doi.org/10.3390/app11167425.

Full text
Abstract:
This article proposes a domain name caching method for the multicore network-traffic capture system, which significantly improves insert latency, throughput and hit rate. The caching method is composed of a cache replacement algorithm and a cache set method. The method is easy to implement, low in deployment cost, and suitable for various multicore caching systems. Moreover, it can reduce the use of locks by changing data structures and algorithms. Experimental results show that compared with other caching systems, our proposed method reaches the highest throughput under multiple cores, which indica
APA, Harvard, Vancouver, ISO, and other styles
32

Tanwir, Tanwir, Parma Hadi Rantelinggi, and Sri Widiastuti. "Peningkatan Kinerja Jaringan Dengan Menggunakan Multi-Rule Algorithm." Jurnal Teknologi Informasi dan Ilmu Komputer 8, no. 1 (2021): 69. http://dx.doi.org/10.25126/jtiik.0812676.

Full text
Abstract:
A replacement algorithm is a mechanism for replacing old objects in the cache with new ones by deleting objects, thereby reducing bandwidth usage and server load. Deletion is performed when the cache is full and storage of new entries is required. In general, the FIFO, LRU and LFU algorithms are often used for object replacement, but a frequently used object may be deleted during cache replacement while it is still in use; as a result, when the client makes a request, a long time is needed for br
APA, Harvard, Vancouver, ISO, and other styles
33

Zulfa, Mulki Indana, Sri Maryani, Ardiansyah Ardiansyah, Triyanna Widiyaningtyas, and Waleed Ali Ali. "Application Level Caching Approach Based on Enhanced Aging Factor and Pearson Correlation Coefficient." JOIV : International Journal on Informatics Visualization 8, no. 1 (2024): 31. http://dx.doi.org/10.62527/joiv.8.1.2143.

Full text
Abstract:
Relational database management systems (RDBMS) have long served as the fundamental infrastructure for web applications. Relatively slow access speeds characterize an RDBMS because its data is stored on a disk. This RDBMS weakness can be overcome using an in-memory database (IMDB). Each query result can be stored in the IMDB to accelerate future access. However, due to the limited capacity of the server cache in the IMDB, an appropriate data priority assessment mechanism needs to be developed. This paper presents a similar cache framework that considers four data vectors, namely the data size,
APA, Harvard, Vancouver, ISO, and other styles
34

Tareef, Afaf, Khawla Al-Tarawneh, and Omar Alhuniti. "An enhanced least recently used page replacement algorithm." Indonesian Journal of Electrical Engineering and Computer Science 38, no. 1 (2025): 417. https://doi.org/10.11591/ijeecs.v38.i1.pp417-427.

Full text
Abstract:
Page replacement algorithms play a crucial role in addressing the performance issues brought on by variations in processor and memory speeds by effectively removing pages from computer memory to improve overall efficiency. The majority of these algorithms can address the page replacement problem, but their implementation is challenging. This paper introduces a new efficient page replacement algorithm, i.e., enhanced least-replacement (E-LRU), based on two introduced features used to select the victim page. By incorporating elements of traditional algorithms such as first in first out (FIFO) and
APA, Harvard, Vancouver, ISO, and other styles
35

Tareef, Afaf, Khawla Al-Tarawneh, and Omar Alhuniti. "An enhanced least recently used page replacement algorithm." Indonesian Journal of Electrical Engineering and Computer Science 38, no. 1 (2025): 417–27. https://doi.org/10.11591/ijeecs.v38.i1.pp417-427.

Full text
Abstract:
Page replacement algorithms play a crucial role in addressing the performance issues brought on by variations in processor and memory speeds by effectively removing pages from computer memory to improve overall efficiency. The majority of these algorithms can address the page replacement problem, but their implementation is challenging. This paper introduces a new efficient page replacement algorithm, i.e., enhanced least-replacement (E-LRU), based on two introduced features used to select the victim page. By incorporating elements of traditional algorithms such as first in first out (FIFO) and
APA, Harvard, Vancouver, ISO, and other styles
36

Javaid, Qaisar, Ayesha Zafar, Muhammad Awais, and Munam Ali Shah. "Cache Memory: An Analysis on Replacement Algorithms and Optimization Techniques." Mehran University Research Journal of Engineering and Technology 36, no. 4 (2017): 831–40. http://dx.doi.org/10.22581/muet1982.1704.08.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Kusekar, Shrutika. "Adaptive Wildcard Rules for TCAM Management using Cache Replacement Algorithms." International Journal for Research in Applied Science and Engineering Technology 7, no. 5 (2019): 2938–43. http://dx.doi.org/10.22214/ijraset.2019.5484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Pan, Cheng, Xiaolin Wang, Yingwei Luo, and Zhenlin Wang. "Penalty- and Locality-aware Memory Allocation in Redis Using Enhanced AET." ACM Transactions on Storage 17, no. 2 (2021): 1–45. http://dx.doi.org/10.1145/3447573.

Full text
Abstract:
Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of
APA, Harvard, Vancouver, ISO, and other styles
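The observation above, that key-value cache misses carry very different penalties unlike hardware cache misses, is the motivation behind cost-aware eviction. The sketch below shows the classic GreedyDual scheme, where each object keeps a credit equal to its miss penalty plus a global inflation value, the cheapest-to-lose object is evicted, and hits restore the credit; it is a generic illustration, not the enhanced-AET memory allocation technique the paper actually proposes, and the penalties in the example are invented.

```python
class GreedyDualCache:
    """Generic GreedyDual: eviction favours objects whose misses are cheap to repay."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.L = 0.0           # inflation value: floor for newly set credits
        self.credit = {}       # key -> current credit (H value)

    def access(self, key, miss_penalty):
        if key in self.credit:                  # hit: restore the full credit
            self.credit[key] = self.L + miss_penalty
            return True
        if len(self.credit) >= self.capacity:   # evict the object with the smallest credit
            victim = min(self.credit, key=self.credit.get)
            self.L = self.credit[victim]        # inflate L to the evicted credit
            del self.credit[victim]
        self.credit[key] = self.L + miss_penalty
        return False


# Misses on "db_row" cost ten times more (e.g. a slow backing database) than on "memo".
cache = GreedyDualCache(capacity=2)
for key, penalty in [("db_row", 10), ("memo", 1), ("tmp", 1), ("db_row", 10)]:
    print(key, "hit" if cache.access(key, penalty) else "miss")
# The expensive "db_row" object survives eviction and is a hit on its second access.
```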
39

Ajaykumar, Kusekar Shrutika, and H. A. Hingoliwala. "A Survey on Adaptive Wildcard Rule Cache Management with Cache Replacement Algorithms for Software-Defined Networks." IJARCCE 7, no. 10 (2018): 10–13. http://dx.doi.org/10.17148/ijarcce.2018.7103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Sheu, Jang-Ping, and Yen-Cheng Chuo. "Wildcard Rules Caching and Cache Replacement Algorithms in Software-Defined Networking." IEEE Transactions on Network and Service Management 13, no. 1 (2016): 19–29. http://dx.doi.org/10.1109/tnsm.2016.2530687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Butt, Ali R., Chris Gniady, and Y. Charlie Hu. "The performance impact of kernel prefetching on buffer cache replacement algorithms." ACM SIGMETRICS Performance Evaluation Review 33, no. 1 (2005): 157–68. http://dx.doi.org/10.1145/1071690.1064231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Butt, Ali R., Chris Gniady, and Y. Charlie Hu. "The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms." IEEE Transactions on Computers 56, no. 7 (2007): 889–908. http://dx.doi.org/10.1109/tc.2007.1029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Meddeb, Maroua, Amine Dhraief, Abdelfettah Belghith, Thierry Monteil, Khalil Drira, and Saad Al-Ahmadi. "Named Data Networking." International Journal on Semantic Web and Information Systems 14, no. 2 (2018): 86–112. http://dx.doi.org/10.4018/ijswis.2018040105.

Full text
Abstract:
This article describes how the named data networking (NDN) has recently received a lot of attention as a potential information-centric networking (ICN) architecture for the future Internet. The NDN paradigm has a great potential to efficiently address and solve the current seminal IP-based IoT architecture issues and requirements. NDN can be used with different sets of caching algorithms and caching replacement policies. The authors investigate the most suitable combination of these two features to be implemented in an IoT environment. For this purpose, the authors first reviewed the current r
APA, Harvard, Vancouver, ISO, and other styles
44

Ashraf, M. Wasim Abbas, Chuanghe Huang, Khuhawar Arif Raza, Shidong Huang, Yabo Yin, and Dong-Fang Wu. "Dynamic Cooperative Cache Management Scheme Based on Social and Popular Data in Vehicular Named Data Network." Wireless Communications and Mobile Computing 2022 (March 22, 2022): 1–11. http://dx.doi.org/10.1155/2022/8374181.

Full text
Abstract:
Vehicular Named Data Network (VNDN) is considered a strong paradigm to deploy in vehicular applications. In VNDN, each node has its own cache, but its limited size directly affects performance in a highly dynamic environment that requires massive and fast content delivery. To mitigate these issues, cooperative caching plays an important role in VNDN. Most studies regarding cooperative caching focus on content replacement and caching algorithms and implement these methods in a static environment rather than a dynamic one. In addition, few existing approaches addressed the ca
APA, Harvard, Vancouver, ISO, and other styles
45

Zhao, Xumin, Guojie Xie, Yi Luo, Jingyuan Chen, Fenghua Liu, and HongPeng Bai. "Optimizing storage on fog computing edge servers: A recent algorithm design with minimal interference." PLOS ONE 19, no. 7 (2024): e0304009. http://dx.doi.org/10.1371/journal.pone.0304009.

Full text
Abstract:
The burgeoning field of fog computing introduces a transformative computing paradigm with extensive applications across diverse sectors. At the heart of this paradigm lies the pivotal role of edge servers, which are entrusted with critical computing and storage functions. The optimization of these servers’ storage capacities emerges as a crucial factor in augmenting the efficacy of fog computing infrastructures. This paper presents a novel storage optimization algorithm, dubbed LIRU (Low Interference Recently Used), which synthesizes the strengths of the LIRS (Low Interference Recency Set) and
APA, Harvard, Vancouver, ISO, and other styles
46

Liu, Yazhi, Pengfei Zhong, Zhigang Yang, Wei Li, and Siwei Li. "Computation Offloading Based on a Distributed Overlay Network Cache-Sharing Mechanism in Multi-Access Edge Computing." Future Internet 16, no. 4 (2024): 136. http://dx.doi.org/10.3390/fi16040136.

Full text
Abstract:
Multi-access edge computing (MEC) enhances service quality for users and reduces computational overhead by migrating workloads and application data to the network edge. However, current solutions for task offloading and cache replacement in edge scenarios are constrained by factors such as communication bandwidth, wireless network coverage, and limited storage capacity of edge devices, making it challenging to achieve high cache reuse and lower system energy consumption. To address these issues, a framework leveraging cooperative edge servers deployed in wireless access networks across differe
APA, Harvard, Vancouver, ISO, and other styles
47

Gast, Nicolas, and Benny Van Houdt. "TTL approximations of the cache replacement algorithms LRU(m) and h-LRU." Performance Evaluation 117 (December 2017): 33–57. http://dx.doi.org/10.1016/j.peva.2017.09.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yovita, Leanna Vidya, Tody Ariefianto Wibowo, Ade Aditya Ramadha, Gregorius Pradana Satriawan, and Sevierda Raniprima. "Performance Analysis of Cache Replacement Algorithm using Virtual Named Data Network Nodes." Jurnal Online Informatika 7, no. 2 (2022): 203–10. http://dx.doi.org/10.15575/join.v7i2.875.

Full text
Abstract:
As a future internet candidate, Named Data Networking (NDN) provides more efficient communication than the TCP/IP network. Unlike TCP/IP, consumer requests in NDN are sent based on content, not on address. Previous studies evaluated NDN performance using a simulator. In this research, we modeled the system using virtual NDN nodes, making the model more relevant to the real NDN. As an essential component in every NDN router, the content store (CS) keeps the data. We use First In First Out (FIFO) and Least Recently Used (LRU) in our nodes as cache replacement algorithms. The in-d
APA, Harvard, Vancouver, ISO, and other styles
49

Gast, Nicolas, and Benny Van Houdt. "Transient and Steady-state Regime of a Family of List-based Cache Replacement Algorithms." ACM SIGMETRICS Performance Evaluation Review 43, no. 1 (2015): 123–36. http://dx.doi.org/10.1145/2796314.2745850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Gast, Nicolas, and Benny Van Houdt. "Transient and steady-state regime of a family of list-based cache replacement algorithms." Queueing Systems 83, no. 3-4 (2016): 293–328. http://dx.doi.org/10.1007/s11134-016-9487-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles