Academic literature on the topic 'Cache replacement algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cache replacement algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Cache replacement algorithms"

1

Wang, James Z., and Vipul Bhulawala. "Design and Implementation of a P2P Cooperative Proxy Cache System." Journal of Interconnection Networks 8, no. 2 (June 2007): 147–62. http://dx.doi.org/10.1142/s0219265907001953.

Full text
Abstract:
In this paper, we design and implement a P2P cooperative proxy caching system based on a novel P2P cooperative proxy caching scheme. To effectively locate the cached web documents, a TTL-based routing protocol is proposed to manage the query and response messages in the P2P cooperative proxy cache system. Furthermore, we design a predict query-route algorithm to improve the TTL-based routing protocol by adding extra information in the query message packets. To select a suitable cache replacement algorithm for the P2P cooperative proxy cache system, three different cache replacement algorithms, LRU, LFU and SIZE, are evaluated using web trace based performance studies on the implemented P2P cooperative proxy cache system. The experimental results show that LRU is an overall better cache replacement algorithm for the P2P proxy cache system although SIZE based cache replacement approach produces slightly better cache hit ratio when cache size is very small. The performance studies also demonstrate that the proposed message routing protocols significantly improve the performance of the P2P cooperative proxy cache system, in terms of cache hit ratio, byte hit ratio, user request latency, and the number of query messages generated in the proxy cache system, compared to the flooding based message routing protocol.
APA, Harvard, Vancouver, ISO, and other styles
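The abstract above evaluates LRU, LFU, and SIZE replacement in a proxy cache. To make the difference between the three eviction rules concrete, here is a minimal Python sketch of a toy proxy cache; it is not the authors' implementation, and the ProxyCache class, its trace format, and the capacity numbers are illustrative assumptions.

```python
class ProxyCache:
    """Toy web-proxy cache illustrating how LRU, LFU and SIZE pick a victim."""

    def __init__(self, capacity_bytes, policy="LRU"):
        self.capacity = capacity_bytes
        self.policy = policy        # "LRU", "LFU" or "SIZE"
        self.used = 0
        self.size = {}              # url -> document size in bytes (cached docs only)
        self.last_use = {}          # url -> logical time of last access
        self.count = {}             # url -> number of accesses seen so far
        self.clock = 0

    def _victim(self):
        if self.policy == "LRU":    # evict the least recently used document
            return min(self.size, key=self.last_use.get)
        if self.policy == "LFU":    # evict the least frequently used document
            return min(self.size, key=self.count.get)
        return max(self.size, key=self.size.get)   # SIZE: evict the largest document

    def access(self, url, size):
        self.clock += 1
        self.count[url] = self.count.get(url, 0) + 1
        self.last_use[url] = self.clock
        hit = url in self.size
        if not hit and size <= self.capacity:
            while self.used + size > self.capacity:
                self.used -= self.size.pop(self._victim())
            self.size[url] = size
            self.used += size
        return hit

# Example usage with a tiny trace of (url, size) pairs.
cache = ProxyCache(capacity_bytes=1000, policy="SIZE")
trace = [("big.iso", 900), ("a.html", 200), ("b.css", 50), ("a.html", 200), ("b.css", 50)]
print(sum(cache.access(u, s) for u, s in trace), "hits out of", len(trace))
```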
2

Prihozhy, A. A. "Simulation of direct mapped, k-way and fully associative cache on all pairs shortest paths algorithms." System Analysis and Applied Information Science, no. 4 (December 30, 2019): 10–18. http://dx.doi.org/10.21122/2309-4923-2019-4-10-18.

Full text
Abstract:
A cache is an intermediate level between the fast CPU and the slow main memory. It aims to store copies of frequently used data and to reduce the access time to the main memory. Caches are capable of exploiting temporal and spatial locality during program execution. When the processor accesses memory, the cache behavior depends on whether the data is in the cache: a cache hit occurs if it is, and a cache miss occurs otherwise. In the latter case, the cache may have to evict other data. The misses produce processor stalls and slow down the computations. The replacement policy chooses the data to evict, trying to predict future accesses to memory. The hit and miss rates depend on the cache type: direct mapped, set associative, or fully associative. The least recently used replacement policy serves the sets. The miss rate strongly depends on the executed algorithm. The all-pairs shortest paths algorithms solve many practical problems, and it is important to know which algorithm and which cache type match best. This paper presents a technique for simulating a direct mapped, k-way associative, or fully associative cache during algorithm execution, to measure the frequency of read-data-to-cache and write-data-to-memory operations. We have measured the frequencies versus the cache size, the data block size, the amount of processed data, the type of cache, and the type of algorithm. After comparing the basic and blocked Floyd-Warshall algorithms, we conclude that the blocked algorithm localizes data accesses well within one block, but it does not localize data dependencies among blocks. The direct mapped cache clearly loses to the associative cache; we can improve its performance by appropriately mapping virtual addresses to physical locations.
APA, Harvard, Vancouver, ISO, and other styles
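Since the abstract describes trace-driven simulation of direct mapped, k-way, and fully associative caches under LRU on Floyd-Warshall access patterns, a compact sketch of such a simulator may help. This is my own simplification (byte-addressed trace, write-back counting, power-of-two geometry), not the paper's code.

```python
from collections import OrderedDict

class SetAssocCache:
    """Simulate a k-way set-associative cache with LRU replacement.
    ways=1 gives a direct mapped cache; a single set gives a fully associative one."""

    def __init__(self, size_bytes, block_bytes, ways):
        self.block = block_bytes
        self.ways = ways
        self.sets = max(1, size_bytes // (block_bytes * ways))
        # Each set maps tag -> dirty flag; insertion order doubles as LRU order.
        self.data = [OrderedDict() for _ in range(self.sets)]
        self.hits = self.misses = self.writebacks = 0

    def access(self, addr, is_write=False):
        block_no = addr // self.block
        index, tag = block_no % self.sets, block_no // self.sets
        cache_set = self.data[index]
        if tag in cache_set:                      # hit: refresh the LRU order
            self.hits += 1
            cache_set.move_to_end(tag)
            cache_set[tag] = cache_set[tag] or is_write
        else:                                     # miss: maybe evict the LRU block
            self.misses += 1
            if len(cache_set) >= self.ways:
                _, dirty = cache_set.popitem(last=False)
                if dirty:
                    self.writebacks += 1          # evicted block written back to memory
            cache_set[tag] = is_write

def floyd_warshall_trace(n, cache):
    """Feed the cache the memory accesses of a basic Floyd-Warshall run over an
    n x n matrix of 8-byte elements: reads of d[i][j], d[i][k], d[k][j], write of d[i][j]."""
    for k in range(n):
        for i in range(n):
            for j in range(n):
                for r, c, wr in ((i, j, False), (i, k, False), (k, j, False), (i, j, True)):
                    cache.access(8 * (r * n + c), is_write=wr)

for ways in (1, 4, 64):          # direct mapped, 4-way, and fully associative (one set)
    c = SetAssocCache(size_bytes=4096, block_bytes=64, ways=ways)
    floyd_warshall_trace(32, c)
    print(ways, "way(s): miss rate =", round(c.misses / (c.misses + c.hits), 3),
          "writebacks =", c.writebacks)
```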
3

Vishnekov, A. V., and E. M. Ivanova. "Dynamic Control Methods of Cache Lines Replacement Policy." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 191 (May 2020): 49–56. http://dx.doi.org/10.14489/vkit.2020.05.pp.049-056.

Full text
Abstract:
The paper investigates the issues of increasing the performance of computing systems by improving the efficiency of cache memory and analyzes the efficiency indicators of replacement algorithms. We show the necessity of creating automated or automatic means for cache memory tuning under the current conditions of program code execution, namely dynamic control of the cache replacement algorithm that swaps the current replacement algorithm for a more effective one under the current computation conditions. We develop methods for caching policy control based on the definition of the program type: cyclic, sequential, locally-point, or mixed. We suggest a procedure for selecting an effective replacement algorithm with decision-support methods based on the current statistics of caching parameters. The paper gives an analysis of existing cache replacement algorithms. We propose a decision-making procedure for selecting an effective cache replacement algorithm based on the methods of ranking alternatives, preferences, and hierarchy analysis. The critical number of cache hits, the average time of data query execution, and the average cache latency are selected as indicators that trigger the swapping procedure for the current replacement algorithm. The main advantage of the proposed approach is its universality. The approach assumes an adaptive decision-making procedure for selecting the effective replacement algorithm. The procedure allows variability of the criteria for evaluating the replacement algorithms, of their efficiency, and of their preference for different types of program code. Dynamically swapping the replacement algorithm for a more efficient one during program execution improves the performance of the computer system.
APA, Harvard, Vancouver, ISO, and other styles
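The paper's selection procedure relies on ranking and hierarchy-analysis methods; as a much simpler illustration of the underlying idea of swapping the active replacement algorithm when a monitored indicator degrades, consider the sketch below. The window length, the hit-ratio threshold, and the choice of LRU and LFU as the two candidate policies are assumptions made for the example, not taken from the paper.

```python
def lru_victim(meta):
    return min(meta, key=lambda k: meta[k]["last_use"])

def lfu_victim(meta):
    return min(meta, key=lambda k: meta[k]["count"])

class AdaptiveCache:
    """Cache whose replacement policy is swapped at run time whenever the
    hit ratio over the last monitoring window falls below a threshold."""

    def __init__(self, capacity, window=1000, threshold=0.5):
        self.capacity, self.window, self.threshold = capacity, window, threshold
        self.policies = [lru_victim, lfu_victim]
        self.active = 0                      # index of the policy currently in use
        self.meta = {}                       # key -> {"last_use": t, "count": n}
        self.clock = 0
        self.window_hits = self.window_refs = 0

    def _maybe_swap(self):
        if self.window_refs < self.window:
            return
        if self.window_hits / self.window_refs < self.threshold:
            self.active = (self.active + 1) % len(self.policies)   # try the other policy
        self.window_hits = self.window_refs = 0

    def access(self, key):
        self.clock += 1
        self.window_refs += 1
        hit = key in self.meta
        if hit:
            self.window_hits += 1
        elif len(self.meta) >= self.capacity:
            del self.meta[self.policies[self.active](self.meta)]
        entry = self.meta.setdefault(key, {"last_use": 0, "count": 0})
        entry["last_use"] = self.clock
        entry["count"] += 1
        self._maybe_swap()
        return hit
```

A controller along the lines of the paper would additionally classify the running program (cyclic, sequential, locally-point, mixed) and weigh several indicators rather than a single hit-ratio threshold.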
4

Begum, B. Shameedha, and N. Ramasubramanian. "Design of an Intelligent Data Cache with Replacement Policy." International Journal of Embedded and Real-Time Communication Systems 10, no. 2 (April 2019): 87–107. http://dx.doi.org/10.4018/ijertcs.2019040106.

Full text
Abstract:
Embedded systems are designed for a variety of applications ranging from hard real-time applications to mobile computing, which demand various types of cache designs for better performance. Since real-time applications place stringent requirements on performance, the role of the cache subsystem assumes significance. Reconfigurable caches meet performance requirements in this context. Existing reconfigurable caches tend to use associativity and size for maximizing cache performance. This article proposes a novel approach to a reconfigurable and intelligent data cache (L1) based on replacement algorithms. An intelligent embedded data cache and a dynamically reconfigurable intelligent embedded data cache have been implemented using Verilog 2001 and tested for cache performance. Data collected by enabling the cache with two different replacement strategies show that the hit rate improves by 40% when compared to LRU and 21% when compared to MRU for sequential applications, which will significantly improve the performance of embedded real-time applications.
APA, Harvard, Vancouver, ISO, and other styles
5

Yeung, Kai-Hau, and Kin-Yeung Wong. "An Unifying Replacement Approach for Caching Systems." Journal of Communications Software and Systems 3, no. 4 (December 20, 2007): 256. http://dx.doi.org/10.24138/jcomss.v3i4.247.

Full text
Abstract:
A cache replacement algorithm called probability based replacement (PBR) is proposed in this paper. The algorithm makes replacement decisions based on the byte access probabilities of documents. This concept can be applied to both small conventional web documents and large video documents. The performance of the PBR algorithm is studied by both analysis and simulation. By comparing cache hit probability, hit rate, and average time spent in three systems, it is shown that the proposed algorithm outperforms the commonly used LRU and LFU algorithms. Simulation results show that, when large video documents are considered, the PBR algorithm provides up to 120% improvement in cache hit rate compared to that of conventional algorithms. The uniqueness of this work is that, unlike previous studies that propose different solutions for different types of documents separately, the proposed PBR algorithm provides a simple and unified approach to serve different types of documents in a single system.
APA, Harvard, Vancouver, ISO, and other styles
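The abstract does not give the exact probability model behind PBR, so the following sketch only illustrates the flavour of the idea: estimate how likely each cached document's bytes are to be requested again and evict the document with the smallest estimate. The frequency-based estimator and the per-byte normalisation are my assumptions, not the authors' formulation.

```python
class PBRCache:
    """Sketch of a probability-based replacement cache: estimate how likely each
    cached document's bytes are to be requested again and evict the document
    with the smallest estimate."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.size = {}        # document -> size in bytes (cached documents only)
        self.requests = {}    # document -> number of requests observed
        self.total = 0        # total requests observed

    def _byte_access_probability(self, doc):
        # Fraction of all requests that went to this document, spread over its
        # bytes, so large but rarely requested documents rank lowest.
        return self.requests[doc] / max(self.total, 1) / self.size[doc]

    def access(self, doc, size):
        self.total += 1
        self.requests[doc] = self.requests.get(doc, 0) + 1
        hit = doc in self.size
        if not hit and size <= self.capacity:
            while self.used + size > self.capacity:
                victim = min(self.size, key=self._byte_access_probability)
                self.used -= self.size.pop(victim)
            self.size[doc] = size
            self.used += size
        return hit
```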
6

Liu, Tian, Wei Zhang, Tao Xu, and Guan Wang. "Research and Analysis of Design and Optimization of Magnetic Memory Material Cache Based on STT-MRAM." Key Engineering Materials 815 (August 2019): 28–34. http://dx.doi.org/10.4028/www.scientific.net/kem.815.28.

Full text
Abstract:
This paper proposes a cache replacement algorithm based on STT-MRAM magnetic memory, which aims to make better use of material systems based on STT-MRAM magnetic memory. The algorithm replaces data blocks in the cache by considering the position of the STT-MRAM magnetic memory head and the hardware characteristics of the STT-MRAM magnetic memory. This sets the method apart from traditional cache replacement algorithms for magnetic memory, which are generally designed to improve the cache through the algorithm alone while ignoring the hardware characteristics of the storage device. In this way, the method improves cache lifetime and efficiency while respecting the material characteristics of the STT-MRAM magnetic memory.
APA, Harvard, Vancouver, ISO, and other styles
7

P, Pratheeksha, and Revathi S. A. "Machine Learning-Based Cache Replacement Policies: A Survey." International Journal of Engineering and Advanced Technology 10, no. 6 (August 30, 2021): 19–22. http://dx.doi.org/10.35940/ijeat.f2907.0810621.

Full text
Abstract:
Despite extensive developments in improving cache hit rates, designing an optimal cache replacement policy that mimics Belady's algorithm still remains a challenging task. Existing standard static replacement policies do not adapt to the dynamic nature of memory access patterns, and the diversity of computer programs only exacerbates the problem. Several factors affect the design of a replacement policy, such as hardware upgrades, memory overheads, memory access patterns, model latency, etc. The amalgamation of a fundamental concept like cache replacement with advanced machine learning algorithms provides surprising results and drives the development towards cost-effective solutions. In this paper, we review some of the machine-learning based cache replacement policies that outperformed the static heuristics.
APA, Harvard, Vancouver, ISO, and other styles
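The survey measures learned policies against Belady's algorithm, which evicts the block whose next reference lies furthest in the future and therefore needs the whole trace in advance. For reference, a small offline implementation of that rule (my own sketch, not code from the paper) looks like this:

```python
def belady_miss_count(trace, capacity):
    """Offline Belady/OPT replacement: on a miss with a full cache, evict the
    resident block whose next reference is furthest in the future (or never comes)."""
    # For every position, find where the same block is referenced next
    # (len(trace) stands for "never again").
    next_use = [len(trace)] * len(trace)
    last_seen = {}
    for i in range(len(trace) - 1, -1, -1):
        next_use[i] = last_seen.get(trace[i], len(trace))
        last_seen[trace[i]] = i

    cache = {}                                  # block -> position of its next use
    misses = 0
    for i, block in enumerate(trace):
        if block in cache:
            cache[block] = next_use[i]          # hit: update the next-use position
            continue
        misses += 1
        if len(cache) >= capacity:
            victim = max(cache, key=cache.get)  # reused furthest away, or never
            del cache[victim]
        cache[block] = next_use[i]
    return misses

print(belady_miss_count(list("abcabdacb"), capacity=2))
```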
8

Jeong, J., and M. Dubois. "Cache replacement algorithms with nonuniform miss costs." IEEE Transactions on Computers 55, no. 4 (April 2006): 353–65. http://dx.doi.org/10.1109/tc.2006.50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kharbutli, M., and Yan Solihin. "Counter-Based Cache Replacement and Bypassing Algorithms." IEEE Transactions on Computers 57, no. 4 (April 2008): 433–47. http://dx.doi.org/10.1109/tc.2007.70816.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Cache replacement algorithms"

1

Altman, Erik R. (Erik Richter). "Genetic algorithms and cache replacement policy." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61096.

Full text
Abstract:
The most common and generally best performing replacement algorithm in modern caches is LRU. Despite LRU's superiority, it is still possible that other feasible and implementable replacement policies could yield better performance. (34) found that an optimal replacement policy (OPT) would often have a miss rate 70% that of LRU.
If better replacement policies exist, they may not be obvious. One way to find better policies is to study a large number of address traces for common patterns. Such an undertaking involves such a large amount of data, that some automated method of generating and evaluating policies is required. Genetic Algorithms provide such a method, and have been used successfully on a wide variety of tasks (21).
The best replacement policy found using this approach had a mean improvement in overall hit rate of 0.6% over LRU for the benchmarks used. This corresponds to 27% of the 2.2% mean difference between LRU and OPT. Performance of the best of these replacement policies was found to be generally superior to shadow cache (33), an enhanced replacement policy similar to some of those used here.
APA, Harvard, Vancouver, ISO, and other styles
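The abstract does not say how the thesis encodes a replacement policy for the genetic algorithm, so the sketch below is only a loose illustration of the approach: a candidate policy is a pair of weights in a linear priority function over per-line features (recency and reference count), and its fitness is the hit count obtained by simulating it on an address trace. The parameters, the feature set, and the crossover and mutation operators are illustrative assumptions.

```python
import random

def simulate(trace, capacity, weights):
    """Hit count of a policy that evicts the line with the lowest priority,
    where priority = w_recency * last_use_time + w_count * reference_count."""
    w_recency, w_count = weights
    lines, clock, hits = {}, 0, 0               # addr -> (last_use, count)
    for addr in trace:
        clock += 1
        if addr in lines:
            hits += 1
        elif len(lines) >= capacity:
            victim = min(lines, key=lambda a: w_recency * lines[a][0] + w_count * lines[a][1])
            del lines[victim]
        _, count = lines.get(addr, (0, 0))
        lines[addr] = (clock, count + 1)
    return hits

def evolve(trace, capacity, generations=20, population=20):
    """Truncation selection, averaging crossover and Gaussian mutation over the weights."""
    pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=lambda w: simulate(trace, capacity, w), reverse=True)
        parents = pop[: population // 2]        # keep the fittest half
        children = []
        while len(parents) + len(children) < population:
            a, b = random.sample(parents, 2)
            children.append(tuple((x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)))
        pop = parents + children
    return pop[0]

# Synthetic trace: a hot working set of 8 addresses plus occasional cold addresses.
trace = [random.randrange(8) if random.random() < 0.7 else random.randrange(256)
         for _ in range(2000)]
best = evolve(trace, capacity=16)
print("evolved weights:", best,
      "| evolved hits:", simulate(trace, 16, best),
      "| plain LRU hits:", simulate(trace, 16, (1.0, 0.0)))
```

With weights (1.0, 0.0) the priority function degenerates to plain LRU, which gives a convenient baseline for the comparison printed above.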
2

Moreira, Josilene Aires. "Cache strategies for internet-based video on-demand distribution." Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/1659.

Full text
Aires Moreira, Josilene; Fawzi Hadj Sadok, Djamel. Cache strategies for internet-based video on-demand distribution. 2011. Doctoral thesis. Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2011.
APA, Harvard, Vancouver, ISO, and other styles
3

Valero, Bresó Alejandro. "Hybrid caches: design and data management." Doctoral thesis, Editorial Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/32663.

Full text
Abstract:
Cache memories have usually been implemented with Static Random-Access Memory (SRAM) technology since it is the fastest electronic memory technology. However, this technology consumes a high amount of leakage current, which is a major design concern because leakage energy consumption increases as the transistor size shrinks. Alternative technologies are being considered to reduce this consumption. Among them, embedded Dynamic RAM (eDRAM) technology provides minimal area and leakage by design, but reads are destructive and it is not as fast as SRAM. In this thesis, both SRAM and eDRAM technologies are mingled to take the advantages that each of them offers. First, they are combined at cell level to implement an n-bit macrocell consisting of one SRAM cell and n-1 eDRAM cells. The macrocell is used to build n-way set-associative hybrid first-level (L1) data caches having one SRAM way and n-1 eDRAM ways. A single SRAM way is enough to achieve good performance given the high data locality of L1 caches. Architectural mechanisms such as way-prediction, swaps, and scrub operations are considered to avoid unnecessary eDRAM reads, to maintain the Most Recently Used (MRU) data in the fast SRAM way, and to completely avoid refresh logic. Experimental results show that, compared to a conventional SRAM cache, leakage and area are largely reduced with a scarce impact on performance. The study of the benefits of hybrid caches has also been carried out in second-level (L2) caches acting as Last-Level Caches (LLCs). In this case, the technologies are combined at bank level, and the optimal ratio of SRAM and eDRAM banks that achieves the best trade-off among performance, energy, and area is identified. As in L1 caches, the MRU blocks are kept in the SRAM banks and they are accessed first to avoid unnecessary destructive reads. Nevertheless, refresh logic is not removed since data locality widely differs at this cache level. Experimental results show that a hybrid LLC with an eighth of its banks built with SRAM technology is enough to achieve the best target trade-off. This dissertation also deals with the performance of replacement policies in heterogeneous LLCs, mainly focusing on the energy overhead incurred by refresh operations. The thesis defines a new concept, namely the MRU-Tour (MRUT), that helps estimate reuse information of cache blocks. Based on this concept, it proposes a family of MRUT-based replacement algorithms that randomly select the victim block among those having a single MRUT. These policies are enhanced to leverage recency information for a few blocks and to adapt to changes in the working set of the benchmarks. Results show that the proposed MRUT policies, with simpler hardware complexity, outperform the Least Recently Used (LRU) policy and a set of the most representative state-of-the-art replacement policies for LLCs. Refresh operations represent an important fraction of the overall dynamic energy consumption of eDRAM LLCs. This fraction increases with the cache capacity, since more blocks have to be refreshed in a given period of time. Prior works have attacked the refresh energy taking into account inter-cell feature variations. Unlike these works, this thesis proposes a selective refresh policy based on the MRUT concept. The devised policy takes into account the number of MRUTs of a block to decide whether the block is refreshed. In this way, many refreshes done in a typical distributed refresh policy are skipped (i.e., in those blocks having a single MRUT).
This refresh mechanism is applied in the hybrid LLC memory. Results show that refresh energy consumption is largely reduced with respect to a conventional eDRAM cache, while the performance degradation is minimal with respect to a conventional SRAM cache.
Valero Bresó, A. (2013). Hybrid caches: design and data management [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/32663
APA, Harvard, Vancouver, ISO, and other styles
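The abstract defines an MRU-Tour (MRUT) as the number of times a block holds the most-recently-used position of its set while resident, and the proposed policies pick a random victim among blocks with a single MRUT. A minimal sketch of that selection rule for one cache set follows; everything beyond what the abstract states (the fallback when no single-MRUT block exists, the bookkeeping details) is my own guess.

```python
import random

class MRUTSet:
    """One set of a set-associative cache with MRUT-based victim selection:
    count how many times each resident block has held the MRU position of the
    set, and on a miss pick a random victim among blocks with a single MRU-Tour."""

    def __init__(self, ways):
        self.ways = ways
        self.mrut = {}      # tag -> number of MRU tours while resident
        self.mru = None     # tag currently holding the MRU position

    def access(self, tag):
        hit = tag in self.mrut
        if not hit:
            if len(self.mrut) >= self.ways:
                single = [t for t in self.mrut if self.mrut[t] == 1]
                victim = random.choice(single or list(self.mrut))
                del self.mrut[victim]
                if victim == self.mru:
                    self.mru = None
            self.mrut[tag] = 0
        if self.mru != tag:      # a new tour starts whenever the block (re)gains MRU
            self.mrut[tag] += 1
            self.mru = tag
        return hit
```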
4

Madhugiri, Shamsundar Abhiram. "Probability based cache replacement algorithm for the hypervisor cache." Thesis, Wichita State University, 2012. http://hdl.handle.net/10057/5532.

Full text
Abstract:
Virtualization is one of the key technologies which help in server consolidation, disaster recovery, and dynamic load balancing. The ratio of physical machines to virtual machines can be as high as 1:10, and this makes caching a key parameter which affects the performance of virtualization. Researchers have proposed the idea of having an exclusive hypervisor cache at the Virtual Machine Monitor (VMM), which could ease congestion and also improve the performance of the caching mechanism. Traditionally, the Least Recently Used (LRU) algorithm is the cache replacement policy used in most caches. This algorithm has many drawbacks, such as no scan resistance, and hence does not form an ideal candidate to be utilized in the hypervisor cache. To overcome this, this research focuses on the development of a new algorithm known as the "Probability Based Cache Replacement Algorithm". This algorithm does not evict memory addresses based on just the recency of memory traces; it also considers the access history of all the addresses, making it scan resistant. In this research, a hypervisor cache is simulated using a C program and different workloads are tested in order to validate our proposal. This research shows that there is considerable improvement in performance using the Probability Based Cache Replacement Algorithm in comparison with the traditional LRU algorithm.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
5

Puzak, Thomas Roberts. "Analysis of Cache Replacement-Algorithms." 1985. https://scholarworks.umass.edu/dissertations/AAI8509594.

Full text
Abstract:
This thesis describes a model used to analyze the replacement decisions made by LRU and OPT (Least-Recently-Used and an optimal replacement-algorithm). The model identifies a set of lines in the LRU cache that are dead, that is, lines that must leave the cache before they can be rereferenced. The model shows that the majority of the cache misses that OPT avoids over LRU come from the most-recently-discarded lines of the LRU cache. Also shown is that a very small set of lines account for the majority of the misses that OPT avoids over LRU. OPT requires perfect knowledge of the future and is not realizable, but our results lead to three realizable near-optimal replacement algorithms. These new algorithms try to duplicate the replacement decisions made by OPT. Simulation results, using a trace-tape and cache simulator, show that these new algorithms achieve up to eight percent fewer misses than LRU and obtain about 20 percent of the miss reduction that OPT obtains. Also presented in the thesis are two new trace-tape reduction techniques. Simulation results show that reductions in trace-tape length of two orders of magnitude are possible with little or no simulation error introduced.
APA, Harvard, Vancouver, ISO, and other styles
6

Katti, Anil Kumar. "Competitive cache replacement strategies for a shared cache." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-05-3584.

Full text
Abstract:
We consider cache replacement algorithms at a shared cache in a multicore system which receives an arbitrary interleaving of requests from processes that have full knowledge about their individual request sequences. We establish tight bounds on the competitive ratio of deterministic and randomized cache replacement strategies when processes share memory blocks. Our main result for this case is a deterministic algorithm called GLOBAL-MAXIMA which is optimum up to a constant factor when processes share memory blocks. Our framework is a generalization of the application controlled caching framework in which processes access disjoint sets of memory blocks. We also present a deterministic algorithm called RR-PROC-MARK which exactly matches the lower bound on the competitive ratio of deterministic cache replacement algorithms when processes access disjoint sets of memory blocks. We extend our results to multiple levels of caches and prove that an exclusive cache is better than both inclusive and non-inclusive caches; this validates the experimental findings in the literature. Our results could be applied to shared caches in multicore systems in which processes work together on multithreaded computations like Gaussian elimination paradigm, fast Fourier transform, matrix multiplication, etc. In these computations, processes have full knowledge about their individual request sequences and can share memory blocks.
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Po-Yao (王柏堯). "Wildcard Rule Caching and Cache Replacement Algorithms in Software-Defined Networking." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/g4y8ty.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chuo, Yen-Cheng (卓彥呈). "Wildcard Rules Caching and Cache Replacement Algorithms in Software-Defined Networks." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/24916124967374174007.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Department of Computer Science, academic year 103.
In Software-Defined Networking, the flow tables of OpenFlow switches are implemented with ternary content addressable memory (TCAM). Although TCAM can process input packets at high speed, it has four shortcomings: limited capacity, power consumption, heat generation, and board space. Rule caching is a technique for addressing the TCAM capacity problem, which is the most important of these. However, the rule dependency problem is a challenging issue for wildcard rule caching, where packets could mismatch rules. In this paper, we use cover sets to solve the rule dependency problem and cache important rules to TCAM. Instead of calculating the contribution value of an individual rule, our wildcard rule caching algorithm calculates the contribution value of a set of rules. Besides, we propose a cache replacement algorithm that considers the temporal and spatial localities of traffic. Simulation results show that our caching algorithm achieves a 10% average improvement over previous work, and the cache replacement algorithm achieves 12% and 17% higher cache hit ratios than the least recently used and random replacement algorithms, respectively.
APA, Harvard, Vancouver, ISO, and other styles
9

Hsieh, Jung-Ming (謝榮明). "Replacement Algorithms for Web Cache Server based on Least Frequency Ratios." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/37757226777565797999.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Electrical Engineering, academic year 91.
A Web cache server can reduce the response time of the Web server, the amount of data transmitted, the transmission delay, and the chance of network bottlenecks, so that the Web server can serve more users and bandwidth is utilized better. But Web pages are innumerable and the resources of a Web cache server are limited; it cannot hold the Web pages of all user requests. Replacement algorithms are a good solution for making use of the limited resources and reducing the access cost of clients and the Web server. Many algorithms have been proposed so that a limited cache space can satisfy unlimited requests, for example LRU, LFU, Size, and GreedyDual-Size. Some achieve a high hit rate but a low byte hit rate; others are hard to implement but achieve a high byte hit rate. To balance the two, frequency ratios are proposed in this paper. The main features are: 1. Consider the ratio of reference frequencies over two segments, not the raw reference counts. 2. Predict the reference trend for the next cycle. 3. Exploit temporal locality. 4. Compared with GDS, the implementation is simpler. 5. The hit rate and byte hit rate are higher than those of LRU and LFU.
APA, Harvard, Vancouver, ISO, and other styles
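Reading the abstract, the policy compares each page's reference frequency across two consecutive segments and treats the ratio as a prediction of its popularity in the next cycle. The sketch below is only my interpretation of that idea; the segment length, the +1 smoothing, and the decision to ignore document sizes are all assumptions.

```python
class FrequencyRatioCache:
    """Keep pages whose popularity is rising: score each cached page by
    (references in the current segment + 1) / (references in the previous
    segment + 1) and evict the page with the lowest score on a miss."""

    def __init__(self, capacity, segment_length=100):
        self.capacity = capacity
        self.segment_length = segment_length
        self.pages = set()        # cached pages
        self.curr = {}            # page -> references in the current segment
        self.prev = {}            # page -> references in the previous segment
        self.ticks = 0

    def _score(self, page):
        return (self.curr.get(page, 0) + 1) / (self.prev.get(page, 0) + 1)

    def access(self, page):
        self.ticks += 1
        if self.ticks % self.segment_length == 0:    # roll over to a new segment
            self.prev, self.curr = self.curr, {}
        self.curr[page] = self.curr.get(page, 0) + 1
        hit = page in self.pages
        if not hit:
            if len(self.pages) >= self.capacity:
                self.pages.discard(min(self.pages, key=self._score))
            self.pages.add(page)
        return hit
```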
10

Σπηλιωτακάρας, Αθανάσιος. "Διαχείριση κρυφής μνήμης επεξεργαστών με πρόβλεψη" [Processor cache management with prediction]. Thesis, 2010. http://nemertes.lis.upatras.gr/jspui/handle/10889/3000.

Full text
Abstract:
In the continuously changing field of computer architecture, changes have occurred at an exponential rate for at least the last 30 years. Cache memories have become the pole of interest, as processors grow ever faster and more efficient, but memory circuits fail to follow them. The scientific community is now turning to clever solutions which aim to limit the communication cost between the two subsystems. Cache management is an expression of this reality, and one of its most basic parts is cache replacement algorithms. The thesis focuses on the relation between two recent, already applied, replacement policies, and the degree to which they can be merged into a new one. We study the IbRdPrediction (Instruction-based Reuse-Distance Prediction) replacement algorithm and the MLP-Aware (Memory-Level Parallelism aware) replacement algorithm. We thoroughly examine whether it is possible to create a novel instruction-based prediction mechanism that takes into account the MLP (memory-level parallelism) characteristics, and how much it improves the existing techniques with respect to system performance.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Cache replacement algorithms"

1

Aziz, Nazrina, Syariza Abdul-Rahman, and Norhaslina Zainal Abidin, eds. Recent Applications in Quantitative Methods and Information Technology. UUM Press, 2019. http://dx.doi.org/10.32890/9789672210269.

Full text
Abstract:
This book is a guide for researchers who are involved in statistical, mathematical, information technology, and decision science analyses. The purpose of the book is to allow readers to get research ideas on a wide range of topics, such as sampling plans, capital budgeting, completion time in production lines, search patterns for mobile cache replacement policies, home security systems with biometric fingerprints, and web service technology. The analyses in each chapter are explained in detail with samples of real applications in daily life to assist readers in appreciating the theoretical, algorithmic, and mathematical formulations. Prior to reading this book, readers are advised to have some basic foundation in statistical sampling, the tabu search approach, neural networks, algorithms, and mathematical formulation. This book will be beneficial to students and researchers who are looking for a research topic and for ways in which problems can be solved using applied methods.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Cache replacement algorithms"

1

Cohen, Edith, Balachander Krishnamurthy, and Jennifer Rexford. "Evaluating Server-Assisted Cache Replacement in the Web." In Algorithms — ESA’ 98, 307–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-68530-8_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vakali, A. I. "LRU-based algorithms for Web Cache Replacement." In Electronic Commerce and Web Technologies, 409–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44463-7_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kang, Yong-Kyoon, Ki-Chang Kim, and Yoo-Sung Kim. "Probability-Based Tile Pre-fetching and Cache Replacement Algorithms for Web Geographical Information Systems." In Advances in Databases and Information Systems, 127–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44803-9_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ghandeharizadeh, Shahram, Sandy Irani, and Jenny Lam. "Cache Replacement with Memory Allocation." In 2015 Proceedings of the Seventeenth Workshop on Algorithm Engineering and Experiments (ALENEX), 1–9. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2014. http://dx.doi.org/10.1137/1.9781611973754.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Jianmin, Bo Zhang, and Fuzong Lin. "A New Cache Replacement Algorithm in SMO." In Pattern Recognition with Support Vector Machines, 342–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45665-1_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhou, Shuchang. "An Efficient Simulation Algorithm for Cache of Random Replacement Policy." In Lecture Notes in Computer Science, 144–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15672-4_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sasabe, Masahiro, Naoki Wakamiya, Masayuki Murata, and Hideo Miyahara. "Media Streaming on P2P Networks with Bio-inspired Cache Replacement Algorithm." In Biologically Inspired Approaches to Advanced Information Technology, 380–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27835-1_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Xu, Yaoqiang, Chunxiao Xing, and Lizhu Zhou. "A Cache Replacement Algorithm in Hierarchical Storage of Continuous Media Object." In Advances in Web-Age Information Management, 157–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27772-9_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Dong, Linpeng Huang, and Minglu Li. "LFC-K Cache Replacement Algorithm for Grid Index Information Service (GIIS)." In Lecture Notes in Computer Science, 795–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30208-7_107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Haraty, Ramzi A., and Lama Hasan Nahas. "A Recommended Replacement Algorithm for the Scalable Asynchronous Cache Consistency Scheme." In IT Convergence and Security 2017, 88–96. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6451-7_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Cache replacement algorithms"

1

Lee, Ming-Chang, Fang-Yie Leu, and Ying-Ping Chen. "Cache Replacement Algorithms for YouTube." In 2014 IEEE 28th International Conference on Advanced Information Networking and Applications (AINA). IEEE, 2014. http://dx.doi.org/10.1109/aina.2014.91.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Souza, Matheus, Henrique Cota Freitas, and Frédéric Pétrot. "Coherence State Awareness in Way-Replacement Algorithms for Multicore Processors." In XX Simpósio em Sistemas Computacionais de Alto Desempenho. Sociedade Brasileira de Computação, 2019. http://dx.doi.org/10.5753/wscad.2019.8672.

Full text
Abstract:
Due to their impact on program execution performance, cache replacement policies in set-associative caches have been studied in great depth. Currently, most general-purpose processors are multi-core, yet among the very large corpus of research, and much to our surprise, we could not find any replacement policy that actually takes into account information about the sharing state of a cache way. Therefore, in this paper we propose to add, as a complement to the classical time-based way-selection algorithms, information about the sharing state and the number of sharers of the ways. We propose several approaches to take this information into account, and our simulations show that LRU-based replacement policies can be slightly improved by them. Also, a much simpler policy, MRU, can be improved by our strategies, presenting up to 3.5× more IPC than the baseline and up to 82% fewer cache misses.
APA, Harvard, Vancouver, ISO, and other styles
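As a rough illustration of the general idea of complementing a recency-based way-selection rule with coherence information, the sketch below prefers, among the least recently used ways of a set, the line with the fewest sharers. The window size, the direction of the preference (evicting unshared lines first), and the Way record are assumptions made for the example, not the paper's exact heuristic.

```python
from dataclasses import dataclass

@dataclass
class Way:
    tag: int
    last_use: int     # logical time of the last access to this way
    sharers: int      # number of other cores holding a copy of the line

def pick_victim(ways, recency_window=2):
    """Among the `recency_window` least recently used ways of a set, evict the
    line with the fewest sharers; ties fall back to plain LRU."""
    oldest = sorted(ways, key=lambda w: w.last_use)[:recency_window]
    return min(oldest, key=lambda w: (w.sharers, w.last_use))

ways = [Way(tag=0xA, last_use=10, sharers=3),
        Way(tag=0xB, last_use=12, sharers=0),
        Way(tag=0xC, last_use=30, sharers=1),
        Way(tag=0xD, last_use=25, sharers=2)]
print(hex(pick_victim(ways).tag))   # picks 0xB: nearly as old as 0xA but unshared
```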
3

Kedzierski, Kamil, Miquel Moreto, Francisco J. Cazorla, and Mateo Valero. "Adapting cache partitioning algorithms to pseudo-LRU replacement policies." In 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). IEEE, 2010. http://dx.doi.org/10.1109/ipdps.2010.5470352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sheikh, Rami, and Mazen Kharbutli. "Improving cache performance by combining cost-sensitivity and locality principles in cache replacement algorithms." In 2010 IEEE International Conference on Computer Design (ICCD 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccd.2010.5647594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Keqiu, Wenyu Qu, Hong Shen, Di Wu, and Takashi Nanya. "Two Cache Replacement Algorithms Based on Association Rules and Markov Models." In 2005 First International Conference on Semantics, Knowledge and Grid. IEEE, 2005. http://dx.doi.org/10.1109/skg.2005.136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zou, Xueqiang, and Chen Chen. "HQ: An Architecture for Web Cache Replacement Algorithms in Distributed Systems." In 2016 International Conference on Computer and Communication Engineering (ICCCE). IEEE, 2016. http://dx.doi.org/10.1109/iccce.2016.29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Butt, Ali R., Chris Gniady, and Y. Charlie Hu. "The performance impact of kernel prefetching on buffer cache replacement algorithms." In the 2005 ACM SIGMETRICS international conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1064212.1064231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sheu, Jang-Ping, Po-Yao Wang, and RB Jagadeesha. "Wildcard-rule caching and cache replacement algorithms in Software-Defined Networking." In 2017 European Conference on Networks and Communications (EuCNC). IEEE, 2017. http://dx.doi.org/10.1109/eucnc.2017.7980654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

"ANALYSIS OF WEB-PROXY CACHE REPLACEMENT ALGORITHMS UNDER STEADY-STATE CONDITIONS." In 3rd International Conference on Web Information Systems and Technologies. SciTePress - Science and and Technology Publications, 2007. http://dx.doi.org/10.5220/0001285702530260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mourad, Amr Abdelhai, Saleh Mesbah, and Tamer F. Mabrouk. "A Novel Approach to Cache Replacement Policy Model Based on Genetic Algorithms." In 2020 Fourth World Conference on Smart Trends in Systems Security and Sustainablity (WorldS4). IEEE, 2020. http://dx.doi.org/10.1109/worlds450073.2020.9210347.

Full text
APA, Harvard, Vancouver, ISO, and other styles