
Journal articles on the topic 'Cache replacement algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Cache replacement algorithms.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

WANG, JAMES Z., and VIPUL BHULAWALA. "DESIGN AND IMPLEMENTATION OF A P2P COOPERATIVE PROXY CACHE SYSTEM." Journal of Interconnection Networks 08, no. 02 (June 2007): 147–62. http://dx.doi.org/10.1142/s0219265907001953.

Abstract:
In this paper, we design and implement a P2P cooperative proxy caching system based on a novel P2P cooperative proxy caching scheme. To locate cached web documents effectively, a TTL-based routing protocol is proposed to manage the query and response messages in the P2P cooperative proxy cache system. Furthermore, we design a predictive query-route algorithm that improves the TTL-based routing protocol by adding extra information to the query message packets. To select a suitable cache replacement algorithm for the system, three cache replacement algorithms, LRU, LFU, and SIZE, are evaluated in web-trace-based performance studies on the implemented system. The experimental results show that LRU is overall the better cache replacement algorithm for the P2P proxy cache system, although the SIZE-based approach produces a slightly better cache hit ratio when the cache is very small. The performance studies also demonstrate that, compared to a flooding-based message routing protocol, the proposed routing protocols significantly improve the performance of the system in terms of cache hit ratio, byte hit ratio, user request latency, and the number of query messages generated.
APA, Harvard, Vancouver, ISO, and other styles
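The LRU, LFU, and SIZE policies evaluated in this study differ only in how they pick a victim once the cache is full. A minimal sketch of the three eviction rules (illustrative only, not the authors' implementation; the class and names are made up):

```python
from collections import OrderedDict

class Cache:
    """Toy web cache; `policy` decides the victim when capacity is exceeded."""
    def __init__(self, capacity_bytes, policy):
        self.capacity = capacity_bytes
        self.policy = policy          # 'LRU', 'LFU', or 'SIZE'
        self.docs = OrderedDict()     # url -> (size, freq); dict order = recency
        self.used = 0

    def access(self, url, size):
        """Return True on a hit. Assumes a document's size never changes."""
        hit = url in self.docs
        if hit:
            sz, freq = self.docs.pop(url)
            self.docs[url] = (sz, freq + 1)   # move to most-recent end
        else:
            # Evict until the new document fits (toy: ignores docs > capacity).
            while self.used + size > self.capacity and self.docs:
                self._evict()
            self.docs[url] = (size, 1)
            self.used += size
        return hit

    def _evict(self):
        if self.policy == 'LRU':      # least recently used = front of the dict
            victim = next(iter(self.docs))
        elif self.policy == 'LFU':    # fewest recorded accesses
            victim = min(self.docs, key=lambda u: self.docs[u][1])
        else:                         # 'SIZE': largest document goes first
            victim = max(self.docs, key=lambda u: self.docs[u][0])
        self.used -= self.docs.pop(victim)[0]
```

Under SIZE the largest document is discarded first, which is consistent with the abstract's finding: it keeps many small documents resident (good hit ratio in tiny caches) but discards large popular ones, so LRU wins overall.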
2

Prihozhy, A. A. "Simulation of direct mapped, k-way and fully associative cache on all pairs shortest paths algorithms." «System analysis and applied information science», no. 4 (December 30, 2019): 10–18. http://dx.doi.org/10.21122/2309-4923-2019-4-10-18.

Abstract:
A cache is an intermediate level between the fast CPU and the slow main memory. It stores copies of frequently used data in order to reduce the access time to main memory, exploiting the temporal and spatial locality of program execution. When the processor accesses memory, the behavior depends on whether the data is in the cache: a cache hit occurs if it is, and a cache miss otherwise. In the latter case, the cache may have to evict other data. Misses produce processor stalls and slow down the computation. The replacement policy chooses which data to evict, trying to predict future memory accesses. The hit and miss rates depend on the cache type: direct mapped, set associative, or fully associative; the least-recently-used replacement policy serves the sets. The miss rate also depends strongly on the executed algorithm. The all-pairs shortest-paths algorithms solve many practical problems, and it is important to know which algorithm and which cache type match best. This paper presents a technique for simulating direct-mapped, k-way associative, and fully associative caches during algorithm execution, measuring the frequency of reads into the cache and writes back to memory. We have measured these frequencies versus cache size, data block size, amount of processed data, cache type, and algorithm. Comparing the basic and blocked Floyd-Warshall algorithms, we conclude that the blocked algorithm localizes data accesses well within one block but does not localize data dependencies among blocks. The direct-mapped cache performs significantly worse than the associative caches; its performance can be improved by appropriately mapping virtual addresses to physical locations.
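The cache organizations this paper simulates can be contrasted with a tiny trace-driven model (a sketch under assumed parameters, not the authors' simulator): `ways=1` gives a direct-mapped cache where each block maps to one fixed line, larger `ways` gives a k-way set-associative cache, and `ways=num_lines` gives a fully associative cache, with LRU inside each set as the abstract describes.

```python
def misses(trace, num_lines, ways):
    """Count misses for a set-associative cache with LRU within each set.

    ways=1 -> direct mapped; ways=num_lines -> fully associative.
    `trace` is a sequence of block numbers.
    """
    num_sets = num_lines // ways
    sets = [[] for _ in range(num_sets)]   # each set: blocks in LRU order (front = oldest)
    miss = 0
    for block in trace:
        s = sets[block % num_sets]
        if block in s:
            s.remove(block)                # refresh recency on a hit
            s.append(block)
        else:
            miss += 1
            if len(s) == ways:
                s.pop(0)                   # evict the least recently used block
            s.append(block)
    return miss
```

On a short trace such as `[0, 8, 0, 8, 1, 0, 8]` with 8 lines, blocks 0 and 8 collide in the same set of a direct-mapped cache and thrash, whereas a fully associative cache of the same size suffers only cold misses, mirroring the paper's observation that the direct-mapped cache underperforms the associative ones.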
3

Vishnekov, A. V., and E. M. Ivanova. "DYNAMIC CONTROL METHODS OF CACHE LINES REPLACEMENT POLICY." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 191 (May 2020): 49–56. http://dx.doi.org/10.14489/vkit.2020.05.pp.049-056.

Abstract:
The paper investigates how to increase the performance of computing systems by improving the efficiency of cache memory, and analyzes the efficiency indicators of replacement algorithms. We show the need for automated or automatic means of tuning cache memory to the current conditions of program execution, namely dynamic control of the cache replacement algorithm, replacing the current algorithm with one that is more effective under the current computation conditions. We develop methods for caching-policy control based on the type of the program: cyclic, sequential, locally-point, or mixed. We suggest a procedure for selecting an effective replacement algorithm using decision-support methods based on the current statistics of caching parameters, and give an analysis of existing cache replacement algorithms. We propose a decision-making procedure for selecting an effective cache replacement algorithm based on methods of ranking alternatives, preferences, and hierarchy analysis. The critical number of cache hits, the average time of data-query execution, and the average cache latency are selected as the indicators that trigger the swapping procedure for the current replacement algorithm. The main advantage of the proposed approach is its universality: it assumes an adaptive decision-making procedure for selecting the effective replacement algorithm, and allows the criteria for evaluating the replacement algorithms, their efficiency, and their preference to vary for different types of program code. Dynamically swapping the replacement algorithm for a more efficient one during program execution improves the performance of the computer system.
5

Begum, B. Shameedha, and N. Ramasubramanian. "Design of an Intelligent Data Cache with Replacement Policy." International Journal of Embedded and Real-Time Communication Systems 10, no. 2 (April 2019): 87–107. http://dx.doi.org/10.4018/ijertcs.2019040106.

Abstract:
Embedded systems are designed for a variety of applications, ranging from hard real-time applications to mobile computing, which demand various types of cache designs for better performance. Since real-time applications place stringent requirements on performance, the cache subsystem assumes significance. Reconfigurable caches meet performance requirements in this context; existing reconfigurable caches tend to vary associativity and size to maximize cache performance. This article proposes a novel approach: a reconfigurable and intelligent (L1) data cache based on replacement algorithms. An intelligent embedded data cache and a dynamically reconfigurable intelligent embedded data cache have been implemented in Verilog-2001 and tested for cache performance. Data collected by running the cache with two different replacement strategies show that the hit rate improves by 40% compared to LRU and by 21% compared to MRU for sequential applications, which significantly improves the performance of embedded real-time applications.
6

Yeung, Kai-Hau, and Kin-Yeung Wong. "An Unifying Replacement Approach for Caching Systems." Journal of Communications Software and Systems 3, no. 4 (December 20, 2007): 256. http://dx.doi.org/10.24138/jcomss.v3i4.247.

Abstract:
A cache replacement algorithm called probability-based replacement (PBR) is proposed in this paper. The algorithm makes replacement decisions based on the byte access probabilities of documents. This concept can be applied both to small conventional web documents and to large video documents. The performance of the PBR algorithm is studied by both analysis and simulation. By comparing cache hit probability, hit rate, and average time spent in three systems, it is shown that the proposed algorithm outperforms the commonly used LRU and LFU algorithms. Simulation results show that, when large video documents are considered, the PBR algorithm provides up to a 120% improvement in cache hit rate compared to conventional algorithms. The uniqueness of this work is that, unlike previous studies that propose different solutions for different types of documents separately, the proposed PBR algorithm provides a simple and unified approach to serving different types of documents in a single system.
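The core idea of PBR, evicting the document whose bytes are least likely to be accessed, might be sketched as follows (a hypothetical per-byte scoring based on observed access counts, not the paper's exact probability model):

```python
def pbr_victim(docs):
    """Pick an eviction victim in the spirit of probability-based replacement.

    docs: {name: (size_bytes, access_count)}.
    Each document's byte-access probability is estimated as its share of
    observed accesses spread over its bytes, and the document with the
    lowest probability mass per byte is evicted.  (Illustrative scoring
    only; the paper derives the byte access probabilities formally.)
    """
    total = sum(count for _, count in docs.values())
    def byte_access_prob(entry):
        size, count = entry
        return (count / total) / size    # access probability per byte
    return min(docs, key=lambda name: byte_access_prob(docs[name]))
```

With this scoring, a rarely watched large video yields much less probability mass per byte than a frequently fetched small page, so the video is evicted first, which is the behavior that lets one policy serve both document types.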
7

Liu, Tian, Wei Zhang, Tao Xu, and Guan Wang. "Research and Analysis of Design and Optimization of Magnetic Memory Material Cache Based on STT-MRAM." Key Engineering Materials 815 (August 2019): 28–34. http://dx.doi.org/10.4028/www.scientific.net/kem.815.28.

Abstract:
This paper proposes a cache replacement algorithm for STT-MRAM magnetic memory, which aims to make better use of storage systems based on STT-MRAM. The algorithm replaces data blocks in the cache by considering the position of the STT-MRAM head and the hardware characteristics of the memory. This distinguishes it from traditional cache replacement algorithms, which are generally designed to improve the cache through the algorithm alone while ignoring the hardware characteristics of the storage device. By improving cache lifetime and efficiency, the method exploits the material characteristics of the STT-MRAM magnetic memory.
8

P, Pratheeksha, and Revathi S. A. "Machine Learning-Based Cache Replacement Policies: A Survey." International Journal of Engineering and Advanced Technology 10, no. 6 (August 30, 2021): 19–22. http://dx.doi.org/10.35940/ijeat.f2907.0810621.

Abstract:
Despite extensive work on improving cache hit rates, designing an optimal cache replacement policy that mimics Belady's algorithm remains a challenging task. Existing static replacement policies do not adapt to the dynamic nature of memory access patterns, and the diversity of computer programs only exacerbates the problem. Several factors affect the design of a replacement policy, such as hardware upgrades, memory overheads, memory access patterns, and model latency. Combining a fundamental mechanism like cache replacement with advanced machine learning algorithms yields surprising results and drives development towards cost-effective solutions. In this paper, we review some of the machine-learning-based cache replacement policies that have outperformed the static heuristics.
9

Jeong, J., and M. Dubois. "Cache replacement algorithms with nonuniform miss costs." IEEE Transactions on Computers 55, no. 4 (April 2006): 353–65. http://dx.doi.org/10.1109/tc.2006.50.

10

Kharbutli, M., and Yan Solihin. "Counter-Based Cache Replacement and Bypassing Algorithms." IEEE Transactions on Computers 57, no. 4 (April 2008): 433–47. http://dx.doi.org/10.1109/tc.2007.70816.

11

Akbari-Bengar, Davood, Ali Ebrahimnejad, Homayun Motameni, and Mehdi Golsorkhtabaramiri. "Improving of cache memory performance based on a fuzzy clustering based page replacement algorithm by using four features." Journal of Intelligent & Fuzzy Systems 39, no. 5 (November 19, 2020): 7899–908. http://dx.doi.org/10.3233/jifs-201360.

Abstract:
The Internet is one of the most influential new communication technologies and has affected all aspects of human life. Extensive use of the Internet and the rapid growth of network services have increased network traffic and ultimately slowed Internet speeds around the world. Such traffic reduces available network bandwidth, increases server response latency, and increases the access time to web documents. Cache memory is used to improve performance and reduce response time. Because a cache is costly and limited in size compared to other storage devices, a replacement policy is used to select and evict a page to make space for new pages when the cache is full. Many algorithms have been introduced whose performance depends on a high-speed web cache, but they are not well optimized; most are developed from the well-known LRU and LFU designs and take advantage of both. In this research, a page replacement algorithm called FCPRA (Fuzzy Clustering based Page Replacement Algorithm) is presented, which is based on four features. When the cache cannot accommodate a request for a new page, FCPRA selects the page of the lowest-priority cluster with the largest login order and removes it from cache memory. The results show that FCPRA achieves a better hit rate on different data sets and can improve cache performance compared to other algorithms.
12

Haraty, Ramzi A. "Innovative Mobile E-Healthcare Systems: A New Rule-Based Cache Replacement Strategy Using Least Profit Values." Mobile Information Systems 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/6141828.

Abstract:
Providing and managing e-health data from heterogeneous and ubiquitous e-health service providers in a content distribution network (CDN) is a challenging task. A CDN is normally used to cache e-health media content such as real-time medical images and videos. Efficient management, storage, and caching of distributed e-health data in a CDN or cloud computing environment for mobile patients ensures that doctors, health-care professionals, and other e-health service providers have immediate access to e-health information for efficient decision making and better treatment. Caching is one of the key methods in distributed computing environments for improving the performance of data retrieval, and cache replacement algorithms determine which item in the cache can be evicted and replaced. Many caching approaches have been proposed, but SACCS, the Scalable Asynchronous Cache Consistency Scheme, has proved more scalable than the others. In this work, we propose a new cache replacement algorithm, Profit SACCS, based on a rule-based least-profit value; it replaces the least-recently-used strategy that SACCS uses. A comparison with different cache replacement strategies is also presented.
13

Zhang, Jian Wei, Bao Wei Zhang, Si Liu, and Zhao Yang Li. "An Identifier-to-Locator Mapping Buffer Management Algorithm Based on Aimed Pushing and Pre-Fetching Method." Advanced Materials Research 457-458 (January 2012): 1317–25. http://dx.doi.org/10.4028/www.scientific.net/amr.457-458.1317.

Abstract:
This paper analyzes the features of a new network architecture in which locator and identifier are separated. To improve the efficiency of identifier-mapping queries and replacement, backups of the locator/identifier mapping information need to be stored where they are most needed, which is in fact a cache management problem. The paper first analyzes typical cache management algorithms and then proposes a new aimed pushing and pre-fetching strategy that exploits the bidirectional, interactive character of the communication involved in querying the mapping relationship. The Access Switch Router (ASR) cache space is divided into three sections: Waiting_First_Access_Section (WFA), Frequently_Used_Section (FU), and Session_Duration_Section (SDU). Based on the reuse probability of identity mapping information predicted by a Markov model, an adaptive, information-lifetime-based cache management algorithm is proposed. Simulation results show that the proposed buffer management algorithm outperforms existing cache management algorithms such as LFU, LRU, and LFU-LRU.
14

M. Osman, Areej, and Niemah I. Osman. "A Comparison of Cache Replacement Algorithms for Video Services." International Journal of Computer Science and Information Technology 10, no. 2 (April 30, 2018): 95–111. http://dx.doi.org/10.5121/ijcsit.2018.10208.

15

Raigoza, Jaime, and Junping Sun. "Temporal Join with Hilbert Curve Mapping and Adaptive Buffer Management." International Journal of Software Innovation 2, no. 2 (April 2014): 1–19. http://dx.doi.org/10.4018/ijsi.2014040101.

Abstract:
Management of data with a time dimension increases the overhead of storage and query processing in large database applications especially with the join operation, which is a commonly used and expensive relational operator. The temporal join evaluation can be time consuming because temporal data are intrinsically multi-dimensional. Also, due to a limited buffer size, the long-lived data can be frequently swapped-in and swapped-out between disk and main memory thus resulting in a low cache hit ratio. The proposed index-based Hilbert-Temporal Join (Hilbert-TJ) join algorithm maps temporal data into Hilbert curve space that is inherently clustered, thus allowing for fast retrieval and storage. This paper also proposes the Adaptive Replacement Cache-Temporal Data (ARC-TD) buffer replacement policy which favors the cache retention of data pages in proportion to the average life span of the tuples in the buffer. By giving preference to tuples having long life spans, a higher cache hit ratio can be achieved. The caching priority is also balanced between recently and frequently accessed data. The comparison study consists of different join algorithms and buffer replacement policies. Additionally, the Hilbert-TJ algorithm offers support to both valid-time and transaction-time data.
16

Kusekar, Miss Shrutika. "Adaptive Wildcard Rules for TCAM Management using Cache Replacement Algorithms." International Journal for Research in Applied Science and Engineering Technology 7, no. 5 (May 31, 2019): 2938–43. http://dx.doi.org/10.22214/ijraset.2019.5484.

17

Javaid, Qaisar, Ayesha Zafar, Muhammad Awais, and Munam Ali Shah. "Cache Memory: An Analysis on Replacement Algorithms and Optimization Techniques." Mehran University Research Journal of Engineering and Technology 36, no. 4 (October 1, 2017): 831–40. http://dx.doi.org/10.22581/muet1982.1704.08.

18

Han, Luchao, Zhichuan Guo, and Xuewen Zeng. "Research on Multicore Key-Value Storage System for Domain Name Storage." Applied Sciences 11, no. 16 (August 12, 2021): 7425. http://dx.doi.org/10.3390/app11167425.

Abstract:
This article proposes a domain-name caching method for a multicore network-traffic capture system that significantly improves insert latency, throughput, and hit rate. The caching method is composed of a cache replacement algorithm and a cache set method. It is easy to implement, low in deployment cost, and suitable for various multicore caching systems. Moreover, it reduces the use of locks by changing data structures and algorithms. Experimental results show that, compared with other caching systems, the proposed method reaches the highest throughput on multiple cores, indicating that it is well suited for domain-name caching.
19

Tanwir, Tanwir, Parma Hadi Rantelinggi, and Sri Widiastuti. "Peningkatan Kinerja Jaringan Dengan Menggunakan Multi-Rule Algorithm." Jurnal Teknologi Informasi dan Ilmu Komputer 8, no. 1 (February 4, 2021): 69. http://dx.doi.org/10.25126/jtiik.0812676.

Abstract:
A replacement algorithm is a mechanism that exchanges old objects in a cache for new ones, deleting objects so as to reduce bandwidth usage and server load. Deletion occurs when the cache is full and space for new entries is needed. The FIFO, LRU, and LFU algorithms are commonly used for object replacement, but an object that is still in frequent use can be deleted during cache replacement; as a result, subsequent client requests take a long time to retrieve the object again. To overcome this problem, combined cache replacement algorithms are evaluated in the form of the double combination FIFO-LRU and the triple combination FIFO-LRU-LFU. With a cache size of 200 MB, the Mural (Multi-Rule Algorithm) variants produce average response times of 56.33 and 42 ms respectively, whereas a single algorithm requires an average response time of 77 ms. The Multi-Rule Algorithm thus improves delay, throughput, and hit rate, and the Mural cache replacement algorithm is highly recommended for improving client access.
20

Ajaykumar, Kusekar Shrutika, and Prof H. A. Hingoliwala. "A Survey on Adaptive Wildcard Rule Cache Management with Cache Replacement Algorithms for Software - Defined Networks." IJARCCE 7, no. 10 (October 30, 2018): 10–13. http://dx.doi.org/10.17148/ijarcce.2018.7103.

21

Butt, Ali R., Chris Gniady, and Y. Charlie Hu. "The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms." IEEE Transactions on Computers 56, no. 7 (July 2007): 889–908. http://dx.doi.org/10.1109/tc.2007.1029.

22

Butt, Ali R., Chris Gniady, and Y. Charlie Hu. "The performance impact of kernel prefetching on buffer cache replacement algorithms." ACM SIGMETRICS Performance Evaluation Review 33, no. 1 (June 6, 2005): 157–68. http://dx.doi.org/10.1145/1071690.1064231.

23

Sheu, Jang-Ping, and Yen-Cheng Chuo. "Wildcard Rules Caching and Cache Replacement Algorithms in Software-Defined Networking." IEEE Transactions on Network and Service Management 13, no. 1 (March 2016): 19–29. http://dx.doi.org/10.1109/tnsm.2016.2530687.

24

Gast, Nicolas, and Benny Van Houdt. "TTL approximations of the cache replacement algorithms LRU(m) and h-LRU." Performance Evaluation 117 (December 2017): 33–57. http://dx.doi.org/10.1016/j.peva.2017.09.002.

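The TTL approximations studied in this paper generalize the classic characteristic-time ("Che") approximation for plain LRU, in which an LRU cache of capacity C under independent Poisson requests behaves like a TTL cache whose timer T solves sum_i (1 - exp(-lambda_i * T)) = C. A sketch for plain LRU only (assumed request rates, bisection solve; the paper itself treats the richer LRU(m) and h-LRU variants):

```python
import math

def char_time(rates, capacity, hi=1e6):
    """Solve sum_i (1 - exp(-rate_i * T)) = capacity for T by bisection."""
    expected_occupancy = lambda T: sum(1 - math.exp(-r * T) for r in rates)
    lo = 0.0
    for _ in range(200):                 # bisection: occupancy is increasing in T
        mid = (lo + hi) / 2
        if expected_occupancy(mid) < capacity:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def lru_hit_ratio(rates, capacity):
    """Approximate LRU hit ratio: item i hits with probability 1 - exp(-rate_i * T)."""
    T = char_time(rates, capacity)
    total = sum(rates)
    return sum(r * (1 - math.exp(-r * T)) for r in rates) / total
```

For a Zipf-like popularity profile, the approximation reproduces the expected behavior that the hit ratio grows with cache capacity while staying below 1.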
25

Pan, Cheng, Xiaolin Wang, Yingwei Luo, and Zhenlin Wang. "Penalty- and Locality-aware Memory Allocation in Redis Using Enhanced AET." ACM Transactions on Storage 17, no. 2 (May 28, 2021): 1–45. http://dx.doi.org/10.1145/3447573.

Abstract:
Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.
26

Meddeb, Maroua, Amine Dhraief, Abdelfettah Belghith, Thierry Monteil, Khalil Drira, and Saad Al-Ahmadi. "Named Data Networking." International Journal on Semantic Web and Information Systems 14, no. 2 (April 2018): 86–112. http://dx.doi.org/10.4018/ijswis.2018040105.

Abstract:
Named data networking (NDN) has recently received a lot of attention as a potential information-centric networking (ICN) architecture for the future Internet. The NDN paradigm has great potential to address the issues and requirements of the current IP-based IoT architecture. NDN can be used with different caching algorithms and cache replacement policies, and the authors investigate the most suitable combination of these two features for an IoT environment. They first review current research and development progress in ICN, then conduct a qualitative comparative study of the relevant ICN proposals and discuss the suitability of NDN as a promising architecture for IoT. Finally, they evaluate the performance of NDN in an IoT environment with different caching algorithms and replacement policies. The results show that the consumer-cache caching algorithm used with the Random Replacement (RR) policy significantly improves NDN content validity in an IoT environment.
27

Gast, Nicolas, and Benny Van Houdt. "Transient and Steady-state Regime of a Family of List-based Cache Replacement Algorithms." ACM SIGMETRICS Performance Evaluation Review 43, no. 1 (June 24, 2015): 123–36. http://dx.doi.org/10.1145/2796314.2745850.

28

Gast, Nicolas, and Benny Van Houdt. "Transient and steady-state regime of a family of list-based cache replacement algorithms." Queueing Systems 83, no. 3-4 (June 15, 2016): 293–328. http://dx.doi.org/10.1007/s11134-016-9487-9.

29

Wang, Yinyin, Yuwang Yang, and Qingguang Wang. "An efficient Intelligent Cache Replacement Policy Suitable for PACS." International Journal of Machine Learning and Computing 11, no. 3 (May 2021): 250–55. http://dx.doi.org/10.18178/ijmlc.2021.11.3.1043.

Abstract:
An efficient intelligent cache replacement policy suitable for picture archiving and communication systems (PACS) was proposed in this work. By combining the Support vector machine (SVM) with the classic least recently used (LRU) cache replacement policy, we have created a new intelligent cache replacement policy called SVM-LRU. The SVM-LRU policy is unlike conventional cache replacement policies, which are solely dependent on the intrinsic properties of the cached items. Our PACS-oriented SVM-LRU algorithm identifies the variables that affect file access probabilities by mining medical data. The SVM algorithm is then used to model the future access probabilities of the cached items, thus improving cache performance. Finally, a simulation experiment was performed using the trace-driven simulation method. It was shown that the SVM-LRU cache algorithm significantly improves PACS cache performance when compared to conventional cache replacement policies like LRU, LFU, SIZE and GDS.
30

Holmes, G., B. Pfahringer, and R. Kirkby. "CACHE HIERARCHY INSPIRED COMPRESSION: A NOVEL ARCHITECTURE FOR DATA STREAMS." Journal of IT in Asia 2, no. 1 (April 26, 2016): 39–52. http://dx.doi.org/10.33736/jita.54.2007.

Abstract:
We present an architecture for data streams based on structures typically found in web cache hierarchies. The main idea is to build a meta level analyser from a number of levels constructed over time from a data stream. We present the general architecture for such a system and an application to classification. This architecture is an instance of the general wrapper idea allowing us to reuse standard batch learning algorithms in an inherently incremental learning environment. By artificially generating data sources we demonstrate that a hierarchy containing a mixture of models is able to adapt over time to the source of the data. In these experiments the hierarchies use an elementary performance based replacement policy and unweighted voting for making classification decisions.
31

Sathiyamoorthi and Murali Bhaskaran. "Novel Approaches for Integrating MART1 Clustering Based Pre-Fetching Technique with Web Caching." International Journal of Information Technology and Web Engineering 8, no. 2 (April 2013): 18–32. http://dx.doi.org/10.4018/jitwe.2013040102.

Abstract:
Web caching and Web pre-fetching are two important techniques for improving the performance of Web-based information retrieval systems. The two techniques complement each other, since Web caching exploits the temporal locality of Web objects whereas Web pre-fetching exploits their spatial locality. However, if caching and pre-fetching are integrated inefficiently, network traffic and Web server load may increase. Conventional replacement policies are best suited to memory caching, which involves fixed-size pages, whereas Web caching involves pages of different sizes and therefore needs an algorithm designed for the Web cache environment. Moreover, conventional replacement policies are not suitable in a clustering-based pre-fetching environment, where multiple objects are pre-fetched at once. Care must therefore be taken when integrating Web caching with Web pre-fetching to overcome these limitations. In this paper, novel algorithms are proposed for integrating Web caching with a clustering-based pre-fetching technique that uses Modified ART1 clustering. The proposed algorithm outperforms the traditional algorithms in terms of hit rate and the number of objects to be pre-fetched, and hence saves bandwidth.
32

Zhang, Guo Yin, Bin Tang, Xiang Hui Wang, and Yan Xia Wu. "Neighbor-Referencing Cooperative Cache Policy in Content-Centric Network." Applied Mechanics and Materials 433-435 (October 2013): 1702–8. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.1702.

Abstract:
In-network caching is one of the key aspects of content-centric networks (CCN), but the LRU cache replacement algorithm does not consider the relation between a node's cached contents and those of its neighbor nodes during replacement, which reduces cache efficiency. In this paper, a Neighbor-Referencing Cooperative Cache Policy (NRCCP) for CCN is proposed that checks whether the neighbors have already cached a content object. A node caches the content only when none of its neighbors has cached it, thereby reducing the redundancy of cached content and increasing its variety. Simulation results show that NRCCP performs better, as the network path gains more caching capacity and popular content is distributed more densely.
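The NRCCP admission rule reduces to a simple neighborhood check; a hedged sketch (the function and parameter names are illustrative, not from the paper):

```python
def should_cache(content_id, local_cache, neighbor_caches):
    """NRCCP-style admission decision (simplified sketch): cache a
    passing content object only if neither this node nor any neighbor
    already holds a copy, cutting redundant on-path copies."""
    if content_id in local_cache:
        return False
    return all(content_id not in cache for cache in neighbor_caches)
```

In practice, the neighbor check would be answered from exchanged cache summaries rather than direct inspection of the neighbors' stores.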
33

Korla, Swaroopa, and Shanti Chilukuri. "T-Move: A Light-Weight Protocol for Improved QoS in Content-Centric Networks with Producer Mobility." Future Internet 11, no. 2 (January 27, 2019): 28. http://dx.doi.org/10.3390/fi11020028.

Abstract:
Recent interest in applications where content is of primary interest has triggered the exploration of a variety of protocols and algorithms. For such networks that are information-centric, architectures such as the Content-Centric Networking have been proven to result in good network performance. However, such architectures are still evolving to cater for application-specific requirements. This paper proposes T-Move, a light-weight solution for producer mobility and caching at the edge that is especially suitable for content-centric networks with mobile content producers. T-Move introduces a novel concept called trendiness of data for Content-Centric Networking (CCN)/Named Data Networking (NDN)-based networks. It enhances network performance and quality of service (QoS) using two strategies—cache replacement and proactive content-pushing for handling producer mobility—both based on trendiness. It uses simple operations and smaller control message overhead and is suitable for networks where the response needs to be quick. Simulation results using ndnSIM show reduced traffic, content retrieval time, and increased cache hit ratio with T-Move, when compared to MAP-Me and plain NDN for networks of different sizes and mobility rates.
34

Do, Cong Thuan, Hong-Jun Choi, Jong Myon Kim, and Cheol Hong Kim. "A new cache replacement algorithm for last-level caches by exploiting tag-distance correlation of cache lines." Microprocessors and Microsystems 39, no. 4-5 (June 2015): 286–95. http://dx.doi.org/10.1016/j.micpro.2015.05.005.

35

Wijaya, Marvin Chandra. "Distributed proxy cache replacement algorithm to improve web server performance." Jurnal Teknologi dan Sistem Komputer 8, no. 1 (August 7, 2019): 1–5. http://dx.doi.org/10.14710/jtsiskom.8.1.2020.1-5.

Abstract:
The performance of web processing needs to increase to meet the growth of internet usage; one way to achieve this is to use a cache on the web proxy server. This study examines the implementation of a proxy cache replacement algorithm to increase cache hits on the proxy server. The study was conducted by creating a clustered, distributed web server system using eight web server nodes. The system improved latency by 90 % and increased throughput by a factor of 5.33.
36

LIU, Lei, and Xiaopeng XIONG. "Least cache value replacement algorithm." Journal of Computer Applications 33, no. 4 (October 11, 2013): 1018–22. http://dx.doi.org/10.3724/sp.j.1087.2013.01018.

37

Tang, Bin, Guo Yin Zhang, Zhi Jing Xing, Yan Xia Wu, and Xiang Hui Wang. "An Advanced LRU Cache Replacement Strategy for Content-Centric Network." Applied Mechanics and Materials 462-463 (November 2013): 884–90. http://dx.doi.org/10.4028/www.scientific.net/amm.462-463.884.

Abstract:
In-network caching is one of the key aspects of content-centric networks (CCN), but the LRU cache replacement algorithm does not consider the relation between the contents of a cache and those of its neighbor nodes during replacement, which leaves worthless blocks in the cache and reduces its efficiency. An enhanced LRU (A-LRU) cache replacement strategy is proposed that promptly replaces cache blocks not requested by other nodes and improves the effective utilization of cache space. Simulation results show that the A-LRU strategy increases the cache hit rate, shortens the data request delay and improves overall network performance, verifying the validity of the A-LRU strategy in CCN.
38

Yang, Yin. "A Media Sensitive Cache Replacement Algorithm." IEIT Journal of Adaptive and Dynamic Computing 2011, no. 1 (2011): 7. http://dx.doi.org/10.5813/www.ieit-web.org/ijadc/2011.1.2.

39

Hossain, Ashfaq, Anaikuppam R. Marudarajan, and Mahmoud A. Manzoul. "FUZZY REPLACEMENT ALGORITHM FOR CACHE MEMORY." Cybernetics and Systems 22, no. 6 (November 1991): 733–46. http://dx.doi.org/10.1080/01969729108902309.

40

Пуйденко, Вадим Олексійович, and Вячеслав Сергійович Харченко. "МІНІМІЗАЦІЯ ЛОГІЧНОЇ СХЕМИ ДЛЯ РЕАЛІЗАЦІЇ PSEUDO LRU ШЛЯХОМ МІЖТИПОВОГО ПЕРЕХОДУ У ТРИГЕРНИХ СТРУКТУРАХ" [Minimization of the logic circuit for implementing pseudo-LRU via an inter-type transition in flip-flop structures]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 2 (April 26, 2020): 33–47. http://dx.doi.org/10.32620/reks.2020.2.03.

Abstract:
The principle of program control means that the processor core turns to the computer's main memory for operands or instructions. Architecturally, operands are stored in data segments and instructions in code segments of main memory. The operating system uses both page and segment memory organization, with the page organization always mapped onto the segment organization. Owing to the cached burst cycles of the processor core, copies of main memory pages are stored in the internal associative cache memory, which consists of three units: a data unit, a tag unit, and an LRU unit. The data unit stores operands or instructions, the tag unit contains fragments of address information, and the LRU unit implements the line replacement policy. A miss event triggers the LRU logic to select a cache line in the data unit for replacement. The pseudo-LRU algorithm is a simple policy that compares favorably with the other known replacement policies. Two options for minimizing the replacement-policy hardware of the pseudo-LRU algorithm in a q-way set-associative cache memory are implemented. In both options, the transition from a synchronous D flip-flop structure to a synchronous JK flip-flop structure is carried out. The first option is based on the update sequence of the pseudo-LRU algorithm, which allows the combinational logic for updating the LRU-unit bits to be removed. The second option is based on the sequence of changes of the q-th way index resulting from the pseudo-LRU updates of the LRU-unit bits, which additionally reduces the number of memory elements. Both options improve the performance and reliability of the LRU unit.
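For reference, the tree-based pseudo-LRU policy discussed in that abstract can be sketched in software for a 4-way set with three tree bits (the bit convention below is one common choice, not necessarily the paper's):

```python
class TreePLRU4:
    """Tree pseudo-LRU for a 4-way set: 3 state bits form a binary tree.
    b0 selects the half holding the victim (0 = left pair, 1 = right
    pair); b1 and b2 select the way within the left and right pair."""

    def __init__(self):
        self.b0 = self.b1 = self.b2 = 0

    def touch(self, way):
        # On every access, flip the bits on the path so they point
        # *away* from the way just used.
        if way < 2:
            self.b0 = 1                    # left pair used -> point right
            self.b1 = 1 if way == 0 else 0
        else:
            self.b0 = 0                    # right pair used -> point left
            self.b2 = 1 if way == 2 else 0

    def victim(self):
        # Follow the bits down the tree to the pseudo-LRU way.
        if self.b0 == 0:
            return 0 if self.b1 == 0 else 1
        return 2 if self.b2 == 0 else 3
```

With only 3 bits per set (versus the counters true LRU needs), this is the kind of structure whose flip-flop implementation the paper sets out to minimize.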
41

SEDANO, ENRIQUE, SILVIO SEPULVEDA, FERNANDO CASTRO, DANIEL CHAVER, RODRIGO GONZALEZ-ALBERQUILLA, and FRANCISCO TIRADO. "IMPROVING peLIFO CACHE REPLACEMENT POLICY: HARDWARE REDUCTION AND THREAD-AWARE EXTENSION." Journal of Circuits, Systems and Computers 23, no. 04 (April 2014): 1450046. http://dx.doi.org/10.1142/s0218126614500467.

Abstract:
Studying blocks' behavior during their lifetime in cache can provide useful information to reduce the miss rate and therefore improve processor performance. Following this rationale, the peLIFO replacement algorithm [M. Chaudhuri, Proc. Micro'09, New York, 12–16 December, 2009, pp. 401–412], which dynamically learns the number of cache ways required to satisfy short-term reuses while preserving the remaining ways for long-term reuses, has recently been proposed. In this paper, we propose several changes to the original peLIFO policy to reduce its implementation complexity, and we extend the algorithm to a shared-cache environment, using dynamic information about thread behavior to improve cache efficiency. Experimental results confirm that our simplification techniques reduce the required hardware with a negligible performance penalty, while the best of our thread-aware extensions reduces CPI by 8.7% and 15.2% on average compared to the original peLIFO and LRU, respectively, for a set of 43 multi-programmed workloads on an 8 MB 16-way set-associative shared L2 cache.
42

Li, R., X. Wang, and X. Shi. "A replacement strategy for a distributed caching system based on the spatiotemporal access pattern of geospatial data." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4 (April 23, 2014): 133–37. http://dx.doi.org/10.5194/isprsarchives-xl-4-133-2014.

Abstract:
The cache replacement strategy is the core of a distributed high-speed caching system, and it directly affects the cache hit rate and the utilization of limited cache space. Many reports show that access patterns of geospatial data exhibit temporal and spatial local changes, with popular hot spots that shift over time. Therefore, the key issue for a geospatial cache replacement strategy is to combine the temporal and spatial local changes in access patterns, balance the relationship between them, and fit the distribution and evolution of hotspots. This paper proposes a cache replacement strategy based on access patterns that exhibit spatiotemporal locality. First, the strategy builds a method to express the access frequency and the time interval of geospatial data accesses based on a least-recently-used (LRU) replacement algorithm and its data structure. Second, considering both the spatial correlation between geospatial data accesses and the caching location of the data, it builds access sequences on an LRU stack, which reflect the spatiotemporal locality changes in the access pattern. Finally, to balance the temporal and spatial locality changes in access patterns, the strategy chooses replacement objects based on the length of the access sequences and the cost of caching resource consumption. Experimental results reveal that the proposed cache replacement strategy improves the cache hit rate while achieving good response performance and higher system throughput, so it can handle intensive networked GIS data access requests in a cloud-based environment.
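The scoring idea behind such a strategy — a temporal term and a spatial term traded off against each other and normalised by caching cost — could look like the following sketch; the functional form, names, and weights here are illustrative assumptions, not the paper's formula:

```python
def replacement_score(frequency, idle_seconds, sequence_len, cost, alpha=0.5):
    """Lower-scored tiles are evicted first.  `frequency` and
    `idle_seconds` capture temporal locality (popularity decayed by
    time since last access); `sequence_len` stands in for the spatial
    signal, the length of the LRU-stack access sequence the tile
    belongs to; `cost` is the resource consumption of keeping it."""
    temporal = frequency / (1.0 + idle_seconds)
    spatial = float(sequence_len)
    return (alpha * temporal + (1.0 - alpha) * spatial) / cost
```

Tuning `alpha` shifts the balance between recency-driven and spatial-sequence-driven retention, which is the trade-off the strategy is designed around.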
43

Yao, Haipeng, Chao Fang, Yiru Guo, and Chenglin Zhao. "An Optimal Routing Algorithm in Service Customized 5G Networks." Mobile Information Systems 2016 (2016): 1–7. http://dx.doi.org/10.1155/2016/6146435.

Abstract:
With the widespread use of the Internet, mobile data traffic is growing explosively, which makes 5G cellular networks a growing concern. Recently, ideas related to the future network, for example Software Defined Networking (SDN), Content-Centric Networking (CCN), and Big Data, have drawn more and more attention. In this paper, we propose a service-customized 5G network architecture that introduces the separation of control plane and data plane, in-network caching, and Big Data processing and analysis to resolve the problems traditional cellular radio networks face. Moreover, we design an optimal routing algorithm for this architecture that minimizes the average number of response hops in the network. Simulation results reveal that introducing the cache clearly improves network performance under different network conditions compared to the scenario without a cache. In addition, we explore how the cache hit rate and the average number of response hops change under different cache replacement policies, cache sizes, content popularities, and network topologies.
44

Sridama, Prapai, Somchai Prakancharoen, and Nalinpat Porrawatpreyakorn. "Web Cache Replacement with the Repairable LRU Algorithm." International Review on Computers and Software (IRECOS) 10, no. 6 (June 30, 2015): 620. http://dx.doi.org/10.15866/irecos.v10i6.6746.

45

HIREMATH, SRIKANTHAIAH, and MAHMOUD A. MANZOUL. "AN IMPROVED FUZZY REPLACEMENT ALGORITHM FOR CACHE MEMORIES." Cybernetics and Systems 24, no. 4 (January 1993): 325–39. http://dx.doi.org/10.1080/01969729308961713.

46

Megiddo, N., and D. S. Modha. "Outperforming LRU with an adaptive replacement cache algorithm." Computer 37, no. 4 (April 2004): 58–65. http://dx.doi.org/10.1109/mc.2004.1297303.

47

Khalid, Humayun, and M. S. Obaidat. "Simulation Study of a Novel Cache Replacement Algorithm." SIMULATION 68, no. 4 (April 1997): 209–18. http://dx.doi.org/10.1177/003754979706800402.

48

MA, Zhi-guo, Hong-yuan ZHENG, and Qiu-lin DING. "Cache replacement algorithm of OLAM based on work warehouse." Journal of Computer Applications 29, no. 1 (June 25, 2009): 205–8. http://dx.doi.org/10.3724/sp.j.1087.2009.00205.

49

Gharaibeh, Ammar, Ismail Hababeh, and Mustafa Alshawaqfeh. "An Efficient Online Cache Replacement Algorithm for 5G Networks." IEEE Access 6 (2018): 41179–87. http://dx.doi.org/10.1109/access.2018.2856913.

50

Obaidat, M. S., and H. Khalid. "Estimating neural networks-based algorithm for adaptive cache replacement." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 28, no. 4 (1998): 602–11. http://dx.doi.org/10.1109/3477.704299.

