Academic literature on the topic 'Cache Eviction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cache Eviction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Cache Eviction"

1

Jaamoum, Amine, Thomas Hiscock, and Giorgio Di Natale. "Noise-Free Security Assessment of Eviction Set Construction Algorithms with Randomized Caches." Applied Sciences 12, no. 5 (2022): 2415. http://dx.doi.org/10.3390/app12052415.

Abstract:
Cache timing attacks, i.e., a class of remote side-channel attack, have become very popular in recent years. Eviction set construction is a common step for many such attacks, and algorithms for building them are evolving rapidly. On the other hand, countermeasures are also being actively researched and developed. However, most countermeasures have been designed to secure last-level caches and few of them actually protect the entire memory hierarchy. Cache randomization is a well-known mitigation technique against cache attacks that has a low-performance overhead. In this study, we attempted to determine whether address randomization on first-level caches is worth considering from a security perspective. In this paper, we present the implementation of a noise-free cache simulation framework that enables the analysis of the behavior of eviction set construction algorithms. We show that randomization at the first level of caches (L1) brings about improvements in security but is not sufficient to mitigate all known algorithms, such as the recently developed Prime–Prune–Probe technique. Nevertheless, we show that L1 randomization can be combined with a lightweight random eviction technique in higher-level caches to mitigate known conflict-based cache attacks.
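
Since this entry centers on simulating eviction set behaviour, a minimal noise-free simulation of the underlying conflict test may help readers unfamiliar with the idea. The sketch below is not the authors' framework; the cache geometry (64 sets, 8 ways, 64-byte lines), the LRU policy, and every name are assumptions chosen only to show how one checks whether a candidate group of addresses evicts a victim line.

```python
# Minimal noise-free simulation of the conflict test used when constructing
# eviction sets. Geometry, policy, and names are illustrative assumptions.
from collections import OrderedDict

NUM_SETS, WAYS, LINE = 64, 8, 64


class SetAssocCache:
    def __init__(self):
        # one recency-ordered dict of tags per cache set
        self.sets = [OrderedDict() for _ in range(NUM_SETS)]

    def access(self, addr):
        s = (addr // LINE) % NUM_SETS
        tag = addr // (LINE * NUM_SETS)
        lines = self.sets[s]
        hit = tag in lines
        if hit:
            lines.move_to_end(tag)         # refresh LRU position
        else:
            if len(lines) >= WAYS:
                lines.popitem(last=False)  # evict the least recently used line
            lines[tag] = True
        return hit


def evicts(candidates, victim):
    """Noise-free conflict test: does touching `candidates` evict `victim`?"""
    cache = SetAssocCache()
    cache.access(victim)
    for addr in candidates:
        cache.access(addr)
    return not cache.access(victim)        # a miss on re-access means evicted


victim = 0x1000
same_set = [victim + i * NUM_SETS * LINE for i in range(1, WAYS + 1)]
print(evicts(same_set, victim))            # True: the group is an eviction set
```
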
2

Dong, Chao, Fang Wang, Hong Jiang, and Dan Feng. "Using Lock-Free Design for Throughput-Optimized Cache Eviction." ACM SIGMETRICS Performance Evaluation Review 53, no. 1 (2025): 49–51. https://doi.org/10.1145/3744970.3727330.

Abstract:
This paper presents a practical approach to cache eviction algorithm design, called Mobius, that optimizes the concurrent throughput of caches and reduces cache operation latency by utilizing lock-free data structures, while maintaining high cache hit ratios. Mobius includes two key designs. First, Mobius employs two lock-free FIFO queues to manage cache items, ensuring that all cache operations are executed efficiently in concurrency. Second, Mobius integrates a consecutive detection mechanism that merges multiple modifications during eviction into a single operation, thereby reducing data races. The implementation of Mobius in CacheLib and RocksDB highlights its high concurrency in both synthetic and real-world workloads.
3

Dong, Chao, Fang Wang, Hong Jiang, and Dan Feng. "Using Lock-Free Design for Throughput-Optimized Cache Eviction." Proceedings of the ACM on Measurement and Analysis of Computing Systems 9, no. 2 (2025): 1–28. https://doi.org/10.1145/3727136.

Abstract:
In large-scale information systems, storage device performance continues to improve while workloads expand in size and access characteristics. This growth puts tremendous pressure on caches and storage hierarchy in terms of concurrent throughput. However, existing cache eviction policies often struggle to provide adequate concurrent throughput due to their reliance on coarse-grained locking mechanisms and complex data structures. This paper presents a practical approach to cache eviction algorithm design, called Mobius, that optimizes the concurrent throughput of caches and reduces cache operation latency by utilizing lock-free data structures, while maintaining comparable hit ratios. Mobius includes two key designs. First, Mobius employs two lock-free FIFO queues to manage cache items, ensuring that all cache operations are executed efficiently in parallel. Second, Mobius integrates a consecutive detection mechanism that merges multiple modifications during eviction into a single operation, thereby reducing data races. Extensive evaluations using both synthetic and real-world workloads from high-concurrency clusters demonstrate that Mobius achieves a concurrent-throughput improvement ranging from 1.2× to 8.5× over state-of-the-art methods, while also maintaining lower latency and comparable cache hit ratios. The implementation of Mobius in CacheLib and RocksDB highlights its effectiveness in enhancing cache performance in practical scenarios.
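
The structural idea reported in both Mobius entries above, managing cached items with FIFO queues rather than a lock-protected LRU list, can be illustrated with a small single-threaded sketch. It is not the lock-free Mobius implementation (Python offers no comparable lock-free structures, and the modification-merging mechanism is omitted); the class and field names are assumptions.

```python
# Single-threaded sketch of a two-FIFO eviction discipline. It shows why FIFO
# structures suit concurrent caches (items are only appended or popped at the
# queue ends, never reordered on a hit), but it is NOT the Mobius design.
from collections import deque


class TwoQueueCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.probation = deque()   # items admitted once
        self.protected = deque()   # items re-referenced while on probation
        self.data = {}
        self.hot = set()           # lazily recorded "was hit" flags

    def get(self, key):
        if key in self.data:
            self.hot.add(key)      # no list reordering on a hit
            return self.data[key]
        return None

    def put(self, key, value):
        if key in self.data:
            self.data[key] = value
            return
        if len(self.data) >= self.capacity:
            self._evict()
        self.data[key] = value
        self.probation.append(key)

    def _evict(self):
        while True:
            queue = self.probation if self.probation else self.protected
            key = queue.popleft()
            if key in self.hot and queue is self.probation:
                self.hot.discard(key)
                self.protected.append(key)   # give re-referenced items a pass
                continue
            self.hot.discard(key)
            del self.data[key]
            return
```

Because both queues are touched only at their ends, each deque could in principle be swapped for a lock-free queue without changing the logic, which is the direction the paper's throughput results motivate.
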
4

Ge, Fen, Lei Wang, Ning Wu, and Fang Zhou. "A Cache Fill and Migration Policy for STT-RAM-Based Multi-Level Hybrid Cache in 3D CMPs." Electronics 8, no. 6 (2019): 639. http://dx.doi.org/10.3390/electronics8060639.

Abstract:
Recently, in 3D Chip-Multiprocessors (CMPs), a hybrid cache architecture of SRAM and Non-Volatile Memory (NVM) is generally used to exploit high density and low leakage power of NVM and a low write overhead of SRAM. The conventional access policy does not consider the hybrid cache and cannot make good use of the characteristics of both NVM and SRAM technology. This paper proposes a Cache Fill and Migration policy (CFM) for multi-level hybrid cache. In CFM, data access was optimized in three aspects: Cache fill, cache eviction, and dirty data migration. The CFM reduces unnecessary cache fill, write operations to NVM, and optimizes the victim cache line selection in cache eviction. The results of experiments show that the CFM can improve performance by 24.1% and reduce power consumption by 18% when compared to conventional writeback access policy.
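
As a loose illustration of why victim selection matters in an SRAM/NVM hybrid cache, the snippet below prefers evicting clean lines so that no write-back, and hence no costly NVM write, is triggered. This is a generic heuristic suggested by the abstract, not the CFM policy itself; the line-metadata field names are assumptions.

```python
# Generic victim selection for a hybrid SRAM/NVM cache level: prefer clean
# lines (no write-back, so no extra NVM write), break ties by recency.
def pick_victim(lines):
    """lines: iterable of dicts; lru_rank 0 = least recently used."""
    return min(lines, key=lambda line: (line["dirty"], line["lru_rank"]))


victim = pick_victim([
    {"tag": 0x1A, "dirty": True, "lru_rank": 0},
    {"tag": 0x2B, "dirty": False, "lru_rank": 1},
])
print(hex(victim["tag"]))   # 0x2b: the clean line is chosen despite being newer
```
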
5

Chuchuk, Olga, and Markus Schulz. "Data Popularity for Cache Eviction Algorithms using Random Forests." EPJ Web of Conferences 295 (2024): 01015. http://dx.doi.org/10.1051/epjconf/202429501015.

Abstract:
In the HEP community the prediction of Data Popularity is a topic that has been approached for many years. Nonetheless, while facing increasing data storage challenges, especially in the upcoming HL-LHC era, there is still the need for better predictive models to answer the questions of whether particular data should be kept, replicated, or deleted. Caches have proven to be a convenient technique for partially automating storage management, potentially eliminating some of these questions. On the one hand, one can benefit even from simple cache eviction policies like LRU, on the other hand, we show that incorporation of knowledge about future access patterns has the potential to greatly improve cache performance. In this paper, we study data popularity on the file level, where the special relation between files belonging to the same dataset could be used in addition to the standard attributes. We turn to Machine Learning algorithms, such as Random Forest, which is well suited to work with Big Data: it can be parallelized, is more lightweight and easier to interpret than Deep Neural Networks. Finally, we compare the results with standard cache eviction algorithms and the theoretical optimum.
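
For readers who want to see what "Random Forest for data popularity" can look like in code, here is a toy sketch with scikit-learn: train a regressor on synthetic per-file features and evict the files with the lowest predicted future popularity. The feature set, target, and file names are invented for illustration and do not reproduce the paper's setup.

```python
# Toy version of "rank cached files by predicted future popularity with a
# random forest, evict the least popular first". Features and labels are
# synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# per-file features: [accesses last week, days since last access,
#                     accesses of the parent dataset last week]
X = rng.random((1000, 3))
y = 2 * X[:, 0] + X[:, 2] - X[:, 1] + rng.normal(0, 0.1, 1000)  # synthetic label

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

cached_files = {"file_a": [0.9, 0.1, 0.8], "file_b": [0.05, 0.9, 0.1]}
scores = model.predict(np.array(list(cached_files.values())))
eviction_order = [name for _, name in sorted(zip(scores, cached_files))]
print(eviction_order)   # files with the lowest predicted popularity come first
```
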
6

Rashid, Salman, Shukor Abd Razak, and Fuad A. Ghaleb. "IMU: A Content Replacement Policy for CCN, Based on Immature Content Selection." Applied Sciences 12, no. 1 (2021): 344. http://dx.doi.org/10.3390/app12010344.

Abstract:
In-network caching is the essential part of Content-Centric Networking (CCN). The main aim of a CCN caching module is data distribution within the network. Each CCN node can cache content according to its placement policy. Therefore, it is fully equipped to meet the requirements of future networks demands. The placement strategy decides to cache the content at the optimized location and minimize content redundancy within the network. When cache capacity is full, the content eviction policy decides which content should stay in the cache and which content should be evicted. Hence, network performance and cache hit ratio almost equally depend on the content placement and replacement policies. Content eviction policies have diverse requirements due to limited cache capacity, higher request rates, and the rapid change of cache states. Many replacement policies follow the concept of low or high popularity and data freshness for content eviction. However, when content loses its popularity after becoming very popular in a certain period, it remains in the cache space. Moreover, content is evicted from the cache space before it becomes popular. To handle the above-mentioned issue, we introduced the concept of maturity/immaturity of the content. The proposed policy, named Immature Used (IMU), finds the content maturity index by using the content arrival time and its frequency within a specific time frame. Also, it determines the maturity level through a maturity classifier. In the case of a full cache, the least immature content is evicted from the cache space. We performed extensive simulations in the simulator (Icarus) to evaluate the performance (cache hit ratio, path stretch, latency, and link load) of the proposed policy with different well-known cache replacement policies in CCN. The obtained results, with varying popularity and cache sizes, indicate that our proposed policy can achieve up to 14.31% more cache hits, 5.91% reduced latency, 3.82% improved path stretch, and 9.53% decreased link load, compared to the recently proposed technique. Moreover, the proposed policy performed significantly better compared to other baseline approaches.
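
The abstract sketches a maturity index built from arrival time and request frequency within a time frame, without giving the formula. The placeholder below captures one plausible reading: content whose recent request rate has fallen below its lifetime average is treated as mature (past its popularity peak) and is evicted first, while still-immature content keeps its chance to become popular. The formula, window, and helper names are assumptions, not the IMU index or classifier from the paper.

```python
# Placeholder "maturity" check and victim selection in the spirit of IMU.
def is_mature(arrival, hit_times, now, window=300.0):
    lifetime = max(now - arrival, window)
    lifetime_rate = len(hit_times) / lifetime
    recent_rate = sum(1 for t in hit_times if now - t <= window) / window
    return recent_rate < lifetime_rate


def select_victim(entries, now):
    """entries: dict name -> (arrival_time, [hit timestamps])."""
    mature = [name for name, (arrival, hits) in entries.items()
              if is_mature(arrival, hits, now)]
    pool = mature or list(entries)
    # among the candidates, fall back to the least recently requested item
    return min(pool, key=lambda n: max(entries[n][1] or [entries[n][0]]))
```
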
7

Anurag, Reddy, Naik Anil, and Reddy Sandeep. "Optimizing Cache Storage for Next-Generation Immersive Experiences: A Strategic Framework for high Content Delivery in Content Delivery Networks (CDNs)." Journal of Scientific and Engineering Research 8, no. 9 (2021): 237–41. https://doi.org/10.5281/zenodo.10903118.

Abstract:
This paper explores the critical role of cache storage capacity within Content Delivery Networks (CDNs), in the context of its implications for augmented reality (AR) and virtual reality (VR) content, accentuating its strategic importance in optimizing content distribution and augmenting user experiences in these immersive environments. It investigates key variables such as content popularity, cache hit ratio, retention policy, eviction strategy, cache size, and content size distribution, providing insights into their impact on storage space optimization. The paper outlines the process of calculating the current eviction age, leveraging data collected at the node level. It introduces a forecasting approach that considers total current storage capacity, target eviction age, and a 2.5% month-over-month growth rate to estimate future storage needs and node requirements, especially pertinent in the context of the evolving demands of AR/VR content. Beyond technical aspects, the paper discusses the practical applications of model outputs in decision-making, guiding strategic node deployment and optimizing service performance. It encourages a dynamic approach to cache service growth metrics and suggests exploring long-term database integration for enhanced historical perspectives. Additionally, the paper introduces the concept of exploring the linearity between disk size and cache retention, proposing potential integration into the model for improved predictive accuracy. In essence, it serves as a comprehensive guide for understanding, optimizing, and strategically leveraging cache storage capacity in the dynamic landscape of CDNs.
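
The forecasting step described here is essentially arithmetic, so a small worked example may be useful: scale today's capacity to the target eviction age, compound 2.5% monthly growth, and convert the result into a node count. The node size and the assumption that cache retention scales linearly with capacity are placeholders (the paper itself only proposes exploring that linearity).

```python
# Back-of-the-envelope version of the storage/node forecast described above.
def forecast_nodes(current_capacity_tb, current_eviction_age_h,
                   target_eviction_age_h, months,
                   node_capacity_tb=100, monthly_growth=0.025):
    capacity_for_target = current_capacity_tb * (
        target_eviction_age_h / current_eviction_age_h)
    future_capacity = capacity_for_target * (1 + monthly_growth) ** months
    nodes = int(-(-future_capacity // node_capacity_tb))   # ceiling division
    return round(future_capacity, 1), nodes


# e.g. 800 TB today at a 48 h eviction age, aiming for 72 h, one year out
print(forecast_nodes(800, 48, 72, months=12))   # roughly (1613.9, 17)
```
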
8

Wu, Dehua, Sha Tao, and Wanlin Gao. "Applying Address Encryption and Timing Noise to Enhance the Security of Caches." Electronics 12, no. 8 (2023): 1799. http://dx.doi.org/10.3390/electronics12081799.

Abstract:
Encrypting the mapping relationship between physical and cache addresses has been a promising technique to prevent conflict-based cache side-channel attacks. However, this method is not foolproof and the attackers can still build a side-channel despite the increased difficulty of finding the minimal eviction set. To address this issue, we propose a new protection method that integrates both address encryption and timing noise extension mechanisms. By adding the timing noise extension mechanism to the address encryption method, we can randomly generate cache misses that prevent the attackers from pruning the eviction set. Our analysis shows that the timing noise extension mechanism can cause the attackers to fail in obtaining accurate timing information for accessing memory. Furthermore, our proposal reduces the timing noise generating rate, minimizing performance overhead. Our experiments on SPEC CPU 2017 show that the integrated mechanism only resulted in a tiny performance overhead of 2.9%.
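
To make the two mechanisms concrete, here is a toy model of a keyed set-index mapping plus occasional artificial misses that blur the attacker's timing signal. It only illustrates the shape of the idea: the hash, set count, and noise rate are arbitrary choices, and real proposals implement this in the cache controller on physical addresses rather than in software.

```python
# Toy model of keyed address-to-set remapping plus injected timing noise.
import hashlib
import random

SECRET_KEY = b"per-boot-secret"
NUM_SETS = 1024
NOISE_RATE = 0.01          # chance of forcing an extra (slow) miss


def encrypted_set_index(line_addr):
    digest = hashlib.blake2b(line_addr.to_bytes(8, "little"),
                             key=SECRET_KEY, digest_size=4).digest()
    return int.from_bytes(digest, "little") % NUM_SETS


def observed_hit(real_hit):
    """What the attacker sees: a hit sometimes masquerades as a miss, so the
    timing measurements used to prune an eviction set become unreliable."""
    return real_hit and random.random() >= NOISE_RATE
```
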
9

Batool, Sidra, Muhammad Kaleem, Salman Rashid, Muhammad Azhar Mushtaq, and Iqra Khan. "A survey of classification cache replacement techniques in the content-centric networking domain." International Journal of Advanced and Applied Sciences 11, no. 5 (2024): 12–24. http://dx.doi.org/10.21833/ijaas.2024.05.002.

Abstract:
Content-Centric Networking (CCN) is an innovative approach that emphasizes content. A key strategy in CCN for spreading data across the network is in-network caching. Effective caching methods, including content placement and removal tactics, enhance the use of network resources. Cache replacement, also known as content eviction policies, is essential for maximizing CCN's efficiency. When cache storage is full, some content must be removed to make room for new items due to limited storage space. Recently, several advanced replacement strategies have been developed to determine the most suitable content for eviction. This study categorizes the latest cache replacement strategies into various groups such as static, space scarcity, content update, centralized, energy-efficient, weighted, adaptive, and based on dynamic popularity. These categories are based on the approaches suggested in previous research. Additionally, this paper provides a critical analysis of existing methods and suggests future research directions. To the best of our knowledge, this is the most up-to-date and comprehensive review available on this topic.
10

Pan, Cheng, Xiaolin Wang, Yingwei Luo, and Zhenlin Wang. "Penalty- and Locality-aware Memory Allocation in Redis Using Enhanced AET." ACM Transactions on Storage 17, no. 2 (2021): 1–45. http://dx.doi.org/10.1145/3447573.

Abstract:
Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.
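
The idea of weighing both locality and miss penalty can be shown with a compact scoring rule in the style of Hyperbolic Caching, the baseline named in this abstract: keep items whose reuse rate multiplied by miss penalty is highest, evict the rest. This is not the AET-based pRedis model; the entry fields and the formula are illustrative assumptions.

```python
# Compact penalty- and locality-aware eviction score (Hyperbolic-Caching-style
# frequency/age priority, scaled by miss penalty). Not the pRedis model.
import time


def keep_score(entry, now):
    """entry: dict with 'hits', 'insert_time', and 'miss_penalty_ms' (cost of
    refetching or recomputing the value on a miss)."""
    age = max(now - entry["insert_time"], 1e-9)
    return (entry["hits"] / age) * entry["miss_penalty_ms"]


def pick_eviction_victim(entries):
    """entries: dict key -> entry; returns the key with the lowest keep score."""
    now = time.monotonic()
    return min(entries, key=lambda key: keep_score(entries[key], now))
```
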

Dissertations / Theses on the topic "Cache Eviction"

1

Metreveli, Zviad. "CPHASH : a cache-partitioned hash table with LRU eviction." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66445.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. In this thesis we introduce CPHASH - a scalable fixed size hash table that supports eviction using an LRU list, and CPSERVER - a scalable in memory key/value cache server that uses CPHASH to implement its hash table. CPHASH uses computation migration to avoid transferring data between cores. Experiments on a 48 core machine show that CPHASH has 2 to 3 times higher throughput than a hash table implemented using scalable fine-grained locks. CPSERVER achieves 1.2 to 1.7 times higher throughput than a key/value cache server that uses a hash table with scalable fine-grained locks and 1.5 to 2.6 times higher throughput than MEMCACHED.
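
The core data structure of this thesis, a fixed-capacity hash table with LRU eviction, is easy to sketch; what the sketch below deliberately omits is the thesis's actual contribution (partitioning the table across cores and migrating lookups to the owning core), so treat it only as a baseline illustration.

```python
# Minimal fixed-capacity hash table with LRU eviction.
from collections import OrderedDict


class LRUHashTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()           # recency-ordered key/value pairs

    def get(self, key):
        if key not in self.table:
            return None
        self.table.move_to_end(key)          # mark as most recently used
        return self.table[key]

    def put(self, key, value):
        if key in self.table:
            self.table.move_to_end(key)
        elif len(self.table) >= self.capacity:
            self.table.popitem(last=False)   # evict the least recently used
        self.table[key] = value
```
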
2

Weisenborn, Hildebrand J. "Video popularity metrics and bubble cache eviction algorithm analysis." Thesis, University of Essex, 2018. http://repository.essex.ac.uk/22350/.

Abstract:
Video data is the largest type of traffic in the Internet, currently responsible for over 72% of the total traffic, with over 883PB of data per month in 2016. Large scale CDN solutions are available that offer a variety of distributed hosting platforms for the purpose of transmitting video over IP. However, the IP protocol, unlike ICN protocol implementations, does not provide an any-cast architecture from which a CDN would greatly benefit. In this thesis we introduce a novel cache eviction strategy called ``Bubble,'' as well as two variants of Bubble, that can be applied to any-cast protocols to aid in optimising video delivery. Bubble, Bubble-LRU and Bubble-Insert were found to greatly reduce the quantity of video associated traffic observed in cache enabled networks. Additionally, analysis on two British Telecom (BT) provided video popularity distributions leveraging Kullback-Leibler and Pearson Chi-Squared testing methods was performed. This was done to assess which model, Zipf or Zipf-Mandelbrot, is best suited to replicate video popularity distributions and the results of these tests conclude that Zipf-Mandelbrot is the most appropriate model to replicate video popularity distributions. The work concludes that the novel cache eviction algorithms introduced in this thesis provide an efficient caching mechanism for future content delivery networks and that the modelled Zipf-Mandelbrot distribution is a better method for simulating the performance of caching algorithms.
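
Since the thesis concludes that Zipf-Mandelbrot fits video popularity better than plain Zipf, a short sketch of that distribution may help: rank k gets probability proportional to 1/(k + q)^s, with q = 0 recovering Zipf. The parameter values and catalogue below are arbitrary, not fitted to the BT traces analysed in the thesis.

```python
# Sampling a synthetic request trace from a Zipf-Mandelbrot popularity model.
import random


def zipf_mandelbrot_weights(n_items, s=0.8, q=5.0):
    weights = [1.0 / (k + q) ** s for k in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]


catalogue = [f"video_{k}" for k in range(1, 1001)]
probs = zipf_mandelbrot_weights(len(catalogue))
requests = random.choices(catalogue, weights=probs, k=10_000)  # synthetic trace
```
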
3

Lindqvist, Maria. "Dynamic Eviction Set Algorithms and Their Applicability to Cache Characterisation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420317.

Abstract:
Eviction sets are groups of memory addresses that map to the same cache set. They can be used to perform efficient information-leaking attacks against the cache memory, so-called cache side channel attacks. In this project, two different algorithms that find such sets are implemented and compared. The second of the algorithms improves on the first by using a concept called group testing. It is also evaluated if these algorithms can be used to analyse or reverse engineer the cache characteristics, which is a new area of application for this type of algorithms. The results show that the optimised algorithm performs significantly better than the previous state-of-the-art algorithm. This means that countermeasures developed against this type of attacks need to be designed with the possibility of faster attacks in mind. The results also shows, as a proof-of-concept, that it is possible to use these algorithms to create a tool for cache analysis.
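
The group-testing improvement mentioned in this abstract can be sketched compactly: split the current candidate set into (associativity + 1) groups and discard any group whose removal still leaves an eviction set. The conflict test `evicts(candidates, victim)` is supplied by the caller (for instance, a simulated one like the sketch given earlier in this list); names and the early-stop behaviour are assumptions.

```python
# Group-testing reduction of an eviction set down to associativity elements.
def reduce_eviction_set(candidates, victim, associativity, evicts):
    current = list(candidates)
    while len(current) > associativity:
        group_count = associativity + 1
        groups = [current[i::group_count] for i in range(group_count)]
        for group in groups:
            remainder = [addr for addr in current if addr not in group]
            if evicts(remainder, victim):
                current = remainder       # this group was redundant; drop it
                break
        else:
            break                         # no group can be removed; stop
    return current
```
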
4

Cheng, Ping (鄭評). "Design of MLC STT-RAM-based Last Level Cache to Reduce Read/Write Disturbances by Early Eviction and Swapping." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3zm46q.


Book chapters on the topic "Cache Eviction"

1

Mukhtar, Muhammad Asim, Muhammad Khurram Bhatti, and Guy Gogniat. "IE-Cache: Counteracting Eviction-Based Cache Side-Channel Attacks Through Indirect Eviction." In ICT Systems Security and Privacy Protection. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58201-2_3.

2

Lee, Jung Hwa, Se Jin Kwon, and Tae-Sun Chung. "ERF: Efficient Cache Eviction Strategy for E-commerce Applications." In Mobile and Wireless Technologies 2017. Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5281-1_33.

3

Savary, Lionel, Georges Gardarin, and Karine Zeitouni. "GeoCache." In Geographic Information Systems. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch034.

Abstract:
GML is a promising model for integrating geodata within data warehouses. The resulting databases are generally large and require spatial operators to be handled. Depending on the size of the target geographical data and the number and complexity of operators in a query, the processing time may quickly become prohibitive. To optimize spatial queries over GML encoded data, this paper introduces a novel cache-based architecture. A new cache replacement policy is then proposed. It takes into account the containment properties of geographical data and predicates, and allows evicting the most irrelevant values from the cache. Experiences with the GeoCache prototype show the effectiveness of the proposed architecture with the associated replacement policy, compared to existing works.
4

Savary, Lionel, Georges Gardarin, and Karine Zeitouni. "GeoCache." In Data Warehousing and Mining. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-951-9.ch040.

Abstract:
GML is a promising model for integrating geodata within data warehouses. The resulting databases are generally large and require spatial operators to be handled. Depending on the size of the target geographical data and the number and complexity of operators in a query, the processing time may quickly become prohibitive. To optimize spatial queries over GML encoded data, this chapter introduces a novel cache-based architecture. A new cache replacement policy is then proposed. It takes into account the containment properties of geographical data and predicates, and allows evicting the most irrelevant values from the cache. Experiences with the GeoCache prototype show the effectiveness of the proposed architecture with the associated replacement policy, compared to existing works.

Conference papers on the topic "Cache Eviction"

1

Shevchenko, Olena, Matvii Kuchapin, Zoia Dudar, and Mariya Shirokopetleva. "Enhancing Redis Cache Efficiency Based on Dynamic TTL and Adaptive Eviction Mechanism." In 2025 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream). IEEE, 2025. https://doi.org/10.1109/estream66938.2025.11016870.

2

Kumar, Chetan, and Arijit Nath. "RECminThrash: Recency and Eviction Count Based Cache Replacement Policy to Minimize Thrashing at the Last Level Caches." In 2025 26th International Symposium on Quality Electronic Design (ISQED). IEEE, 2025. https://doi.org/10.1109/isqed65160.2025.11014354.

3

Chen, Yilong, Guoxia Wang, Junyuan Shang, et al. "NACL: A General and Effective KV Cache Eviction Framework for LLM at Inference Time." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-long.428.

4

Morgan, Bradley, Gal Horowitz, Sioli O'Connell, et al. "Slice+Slice Baby: Generating Last-Level Cache Eviction Sets in the Blink of an Eye." In 2025 IEEE Symposium on Security and Privacy (SP). IEEE, 2025. https://doi.org/10.1109/sp61157.2025.00264.

5

Xu, Zhe, and Junmin Wu. "Crowd: An KV Cache Eviction Policy Which Uses Crowd Information to Select Evicted Key-Value Pairs." In 2024 4th International Conference on Computer Science and Blockchain (CCSB). IEEE, 2024. http://dx.doi.org/10.1109/ccsb63463.2024.10735473.

6

Tran, Joe, and Byeong Kil Lee. "EFI: Cache Replacement Policy Using Eviction Frequency Integration." In 2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE). IEEE, 2023. http://dx.doi.org/10.1109/csce60160.2023.00152.

7

Li, Lingda, Dong Tong, Zichao Xie, Junlin Lu, and Xu Cheng. "Improving inclusive cache performance with two-level eviction priority." In 2012 IEEE 30th International Conference on Computer Design (ICCD 2012). IEEE, 2012. http://dx.doi.org/10.1109/iccd.2012.6378668.

8

Dalui, Mamata, Tannishtha Som, Shivani Bansal, Shivam Pant, and Biplab K. Sikdar. "MASI: An eviction aware cache coherence protocol for CMPs." In 2016 Sixth International Symposium on Embedded Computing and System Design (ISED). IEEE, 2016. http://dx.doi.org/10.1109/ised.2016.7977091.

9

Yang, Juncheng, Yazhuo Zhang, Ziyue Qiu, Yao Yue, and Rashmi Vinayak. "FIFO queues are all you need for cache eviction." In SOSP '23: 29th Symposium on Operating Systems Principles. ACM, 2023. http://dx.doi.org/10.1145/3600006.3613147.

10

Dong, Chao, Fang Wang, Hong Jiang, and Dan Feng. "Using Lock-Free Design for Throughput-Optimized Cache Eviction." In SIGMETRICS '25: ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems. ACM, 2025. https://doi.org/10.1145/3726854.3727330.
