Academic literature on the topic 'Cache Hits'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cache Hits.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Cache Hits"

1

Wijaya, Marvin Chandra. "Improving Cache Hits On Replacment Blocks Using Weighted LRU-LFU Combinations." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (2023): 1542–50. http://dx.doi.org/10.17762/ijritcc.v11i10.8706.

Full text
Abstract:
Block replacement refers to the process of selecting a block of data or a cache line to be evicted when a new block needs to be brought into a cache or a memory hierarchy. In computer systems, block replacement policies are used in caching mechanisms, such as CPU caches or disk caches, to determine which blocks are evicted when the cache is full and new data needs to be fetched. The weighted combination of LRU (Least Recently Used) and LFU (Least Frequently Used) is known as the "LFU2" algorithm. LFU2 is an enhanced caching algorithm that aims to leverage the benefits of both LRU and LFU by considering both recency and frequency of item access. In LFU2, each item in the cache is associated with two counters: the usage counter and the recency counter. The usage counter tracks the frequency of item access, while the recency counter tracks the recency of item access. These counters are used to calculate a combined weight for each item in the cache. Based on the experimental results, the LRU-LFU combination method succeeded in increasing cache hits from 94.8% with LRU and 95.5% with LFU to 96.6%.
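The recency-plus-frequency weighting described in this abstract can be sketched as a small cache whose eviction victim minimizes a combined score. The linear weight w and the logical-clock recency stamp below are illustrative assumptions, not the paper's exact LFU2 formula:

```python
class WeightedLRUFUCache:
    """Sketch of a weighted LRU-LFU cache: every entry carries a recency
    stamp and a use count, and the eviction victim is the entry with the
    lowest combined score."""

    def __init__(self, capacity, w=0.5):
        self.capacity = capacity
        self.w = w         # weight on recency; (1 - w) goes to frequency
        self.clock = 0     # logical clock used as the recency stamp
        self.store = {}    # key -> (value, last_access, use_count)

    def _score(self, last_access, use_count):
        # Higher score = more worth keeping; the victim is the minimum.
        recency = last_access / max(self.clock, 1)
        return self.w * recency + (1 - self.w) * use_count

    def get(self, key):
        self.clock += 1
        if key not in self.store:
            return None                       # cache miss
        value, _, count = self.store[key]
        self.store[key] = (value, self.clock, count + 1)
        return value                          # cache hit

    def put(self, key, value):
        self.clock += 1
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self._score(
                self.store[k][1], self.store[k][2]))
            del self.store[victim]
        count = self.store[key][2] if key in self.store else 0
        self.store[key] = (value, self.clock, count + 1)
```

With w = 1 the score reduces to pure recency (LRU-like behavior); with w = 0 it reduces to pure frequency (LFU), which is the trade-off the weighting tunes.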
APA, Harvard, Vancouver, ISO, and other styles
2

Mutanga, Alfred. "A SystemC Cache Simulator for a Multiprocessor Shared Memory System." International Letters of Social and Humanistic Sciences 13 (October 2013): 75–87. http://dx.doi.org/10.18052/www.scipress.com/ilshs.13.75.

Full text
Abstract:
In this research we built a SystemC Level-1 data cache system in a distributed shared memory architectural environment, with each processor having its own local cache. Using a set of Fast Fourier Transform and random trace files, we evaluated the performance of the caches, measured by the number of cache hits/misses, under snooping and directory-based cache coherence protocols. A series of experiments was carried out, and the results show that the directory-based MOESI cache coherency protocol has a performance edge over the snooping Valid-Invalid cache coherency protocol.
APA, Harvard, Vancouver, ISO, and other styles
3

CHEN, HSIN-CHUAN, and JEN-SHIUN CHIANG. "A HIGH-PERFORMANCE SEQUENTIAL MRU CACHE USING VALID-BIT ASSISTANT SEARCH ALGORITHM." Journal of Circuits, Systems and Computers 16, no. 04 (2007): 613–26. http://dx.doi.org/10.1142/s0218126607003824.

Full text
Abstract:
The most recently used (MRU) cache is a set-associative cache designed for associativity higher than 2. However, its access time is increased because the MRU information must be fetched before accessing the sequential MRU (SMRU) cache. In this paper, focusing on the SMRU cache with subblock placement, we propose an MRU cache scheme that separates the valid bits from data memory and uses these valid bits to skip unnecessary accesses to the memory banks. This approach increases the probability of front hits and significantly improves the average access time compared with an SMRU cache without the valid-bit assistant search, especially for large associativity and small subblock sizes.
APA, Harvard, Vancouver, ISO, and other styles
4

Kim, Junghwan, Myeong-Cheol Ko, Moon Sun Shin, and Jinsoo Kim. "A Novel Prefix Cache with Two-Level Bloom Filters in IP Address Lookup." Applied Sciences 10, no. 20 (2020): 7198. http://dx.doi.org/10.3390/app10207198.

Full text
Abstract:
Prefix caching is one of the notable techniques in enhancing the IP address lookup performance which is crucial in packet forwarding. A cached prefix can match a range of IP addresses, so prefix caching leads to a higher cache hit ratio than IP address caching. However, prefix caching has an issue to be resolved. When a prefix is matched in a cache, the prefix cannot be the result without assuring that there is no longer descendant prefix of the matching prefix which is not cached yet. This is due to the aspect of the IP address lookup seeking to find the longest matching prefix. Some prefix expansion techniques avoid the problem, but the expanded prefixes occupy more entries as well as cover a smaller range of IP addresses. This paper proposes a novel prefix caching scheme in which the original prefix can be cached without expansion. In this scheme, for each prefix, a Bloom filter is constructed to be used for testing if there is any matchable descendant. The false positive ratio of a Bloom filter generally grows as the number of elements contained in the filter increases. We devise an elaborate two-level Bloom filter scheme which adjusts the filter size at each level, to reduce the false positive ratio, according to the number of contained elements. The experimental result shows that the proposed scheme achieves a very low cache miss ratio without increasing the number of prefixes. In addition, most of the filter assertions are negative, which means the proposed prefix cache effectively hits the matching prefix using the filter.
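The membership test this scheme builds on can be illustrated with a minimal Bloom filter. The bit-array size m and hash count k below are fixed illustrative parameters; the paper's two-level design instead sizes each level's filter by its element count to hold the false positive ratio down:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item set bits in an
    m-bit array; a membership test can return false positives but never
    false negatives."""

    def __init__(self, m=1024, k=3):
        self.m = m
        self.k = k
        self.bits = 0    # m-bit array packed into one Python int

    def _positions(self, item):
        # Derive k independent positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # False means definitely absent; True may be a false positive.
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

In the prefix-cache setting, a hit on a cached prefix would consult such a filter for matchable descendants; a negative answer lets the cache return the matching prefix immediately.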
APA, Harvard, Vancouver, ISO, and other styles
5

Wijaya, Marvin Chandra. "Distributed proxy cache replacement algorithm to improve web server performance." Jurnal Teknologi dan Sistem Komputer 8, no. 1 (2019): 1–5. http://dx.doi.org/10.14710/jtsiskom.8.1.2020.1-5.

Full text
Abstract:
The performance of web processing needs to increase to meet the growth of internet usage, for example by using a cache on the web proxy server. This study examines the implementation of a proxy cache replacement algorithm to increase cache hits in the proxy server. The study was conducted by creating a clustered (distributed) web server system using eight web server nodes. The system improved latency by 90% and increased throughput by a factor of 5.33.
APA, Harvard, Vancouver, ISO, and other styles
6

Qazi, Faiza, Osman Khalid, Rao Naveed Bin Rais, Imran Ali Khan, and Atta ur Rehman Khan. "Optimal Content Caching in Content-Centric Networks." Wireless Communications and Mobile Computing 2019 (January 23, 2019): 1–15. http://dx.doi.org/10.1155/2019/6373960.

Full text
Abstract:
Content-Centric Networking (CCN) is a novel architecture that is shifting host-centric communication to a content-centric infrastructure. In recent years, in-network caching in CCNs has received significant attention from research community. To improve the cache hit ratio, most of the existing schemes store the content at maximum number of routers along the downloading path of content from source. While this helps in increased cache hits and reduction in delay and server load, the unnecessary caching significantly increases the network cost, bandwidth utilization, and storage consumption. To address the limitations in existing schemes, we propose an optimization based in-network caching policy, named as opt-Cache, which makes more efficient use of available cache resources, in order to reduce overall network utilization with reduced latency. Unlike existing schemes that mostly focus on a single factor to improve the cache performance, we intend to optimize the caching process by simultaneously considering various factors, e.g., content popularity, bandwidth, and latency, under a given set of constraints, e.g., available cache space, content availability, and careful eviction of existing contents in the cache. Our scheme determines optimized set of content to be cached at each node towards the edge based on content popularity and content distance from the content source. The contents that have less frequent requests have their popularity decreased with time. The optimal placement of contents across the CCN routers allows the overall reduction in bandwidth and latency. The proposed scheme is compared with the existing schemes and depicts better performance in terms of bandwidth consumption and latency while using less network resources.
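The time-decayed popularity that opt-Cache uses to age out rarely requested content can be sketched with an exponentially decaying counter; the decay factor and per-slot tick below are illustrative assumptions, not the paper's formulation:

```python
class DecayingPopularity:
    """Popularity counter in which every content's score fades each time
    slot unless refreshed by new requests."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.scores = {}

    def tick(self):
        # Called once per time slot: all popularities fade.
        for name in self.scores:
            self.scores[name] *= self.decay

    def request(self, name):
        self.scores[name] = self.scores.get(name, 0.0) + 1.0

    def top(self, n):
        # The n most popular contents are the caching candidates.
        return sorted(self.scores, key=self.scores.get, reverse=True)[:n]
```

A content that was popular long ago loses its caching priority to one requested recently, which is the behavior the abstract describes for less frequently requested content.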
APA, Harvard, Vancouver, ISO, and other styles
7

Qadri, Muhammad Yasir, Nadia N. Qadri, Martin Fleury, and Klaus D. McDonald-Maier. "Software-Controlled Instruction Prefetch Buffering for Low-End Processors." Journal of Circuits, Systems and Computers 24, no. 10 (2015): 1550161. http://dx.doi.org/10.1142/s0218126615501613.

Full text
Abstract:
This paper proposes a method of buffering instructions by software-based prefetching. The method allows low-end processors to improve their instruction throughput with a minimum of additional logic and power consumption. Low-end embedded processors do not employ caches for mainly two reasons. The first reason is that the overhead of cache implementation in terms of energy and area is considerable. The second reason is that, because a cache's performance primarily depends on the number of hits, an increasing number of misses could cause a processor to remain in stall mode for a longer duration. As a result, a cache may become more of a liability than an advantage. In contrast, the benchmarked results for the proposed software-based prefetch buffering without a cache show a 5–10% improvement in execution time. They also show a 4% or more reduction in the energy-delay-square-product (ED2P) with a maximum reduction of 40%. The results additionally demonstrate that the performance and efficiency of the proposed architecture scales with the number of multicycle instructions. The benchmarked routines tested to arrive at these results are widely deployed components of embedded applications.
APA, Harvard, Vancouver, ISO, and other styles
8

Al-Ahmadi, Saad. "A New Efficient Cache Replacement Strategy for Named Data Networking." International journal of Computer Networks & Communications 13, no. 5 (2021): 19–35. http://dx.doi.org/10.5121/ijcnc.2021.13502.

Full text
Abstract:
The Information-Centric Network (ICN) is a future internet architecture with efficient content retrieval and distribution. Named Data Networking (NDN) is one of the proposed architectures for ICN. NDN's in-network caching improves data availability, reduces retrieval delays and network load, alleviates producer load, and limits data traffic. Despite the existence of several caching decision algorithms, fetching and distributing content with minimum resource utilization remains a great challenge. In this paper, we introduce a new cache replacement strategy called the Enhanced Time and Frequency Cache Replacement strategy (ETFCR), where both cache hit frequency and cache retrieval time are used to select evicted data chunks. ETFCR adds time cycles between the last two requests to adjust a data chunk's popularity and cache hits. We conducted extensive simulations using the ccnSim simulator to evaluate the performance of ETFCR and compare it to that of some well-known cache replacement strategies. Simulation results show that ETFCR outperforms the other cache replacement strategies in terms of cache hit ratio and content retrieval delay.
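The core eviction idea, combining hit frequency with retrieval cost, can be sketched in a few lines. The multiplicative score below is an illustrative assumption, not ETFCR's exact formula:

```python
def choose_victim(chunks):
    """Pick the data chunk whose loss is cheapest to the cache: low hit
    frequency and low cost to fetch it again from upstream.

    chunks: dict mapping chunk name -> (hit_count, retrieval_time_ms)
    """
    return min(chunks, key=lambda c: chunks[c][0] * chunks[c][1])
```

A chunk that is both rarely hit and quick to re-fetch is evicted first, while frequently hit chunks with long retrieval delays are retained, matching the two criteria the abstract names.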
APA, Harvard, Vancouver, ISO, and other styles
9

Titarenko, Larysa, Vyacheslav Kharchenko, Vadym Puidenko, Artem Perepelitsyn, and Alexander Barkalov. "Hardware-Based Implementation of Algorithms for Data Replacement in Cache Memory of Processor Cores." Computers 13, no. 7 (2024): 166. http://dx.doi.org/10.3390/computers13070166.

Full text
Abstract:
Replacement policies have an important role in the functioning of the cache memory of processor cores. The implementation of a successful policy allows us to increase the performance of the processor core and the computer system as a whole. Replacement policies are most often evaluated by the percentage of cache hits during the cycles of the processor bus when accessing the cache memory. The policies that focus on replacing the Least Recently Used (LRU) or Least Frequently Used (LFU) elements, whether instructions or data, are relevant for use. It should be noted that in the paging cache buffer, the above replacement policies can also be used to replace address information. The pseudo LRU (PLRU) policy introduces replacing based on approximate information about the age of the elements in the cache memory. The hardware implementation of any replacement policy algorithm is the circuit. This hardware part of the processor core has certain characteristics: the latency of the search process for a candidate element for replacement, the gate complexity, and the reliability. The characteristics of the PLRUt and PLRUm replacement policies are synthesized and investigated. Both are the varieties of the PLRU replacement policy, which is close to the LRU policy in terms of the percentage of cache hits. In the current study, the hardware implementation of these policies is evaluated, and the possibility of adaptation to each of the policies in the processor core according to a selected priority characteristic is analyzed. The dependency of the rise in the delay and gate complexity in the case of an increase in the associativity of the cache memory is shown. The advantage of the hardware implementation of the PLRUt algorithm in comparison with the PLRUm algorithm for higher values of associativity is shown.
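The tree-bit mechanics of tree-based pseudo-LRU for one 4-way set can be sketched as follows; the bit convention (0 = the victim search goes to the even side) is one illustrative choice among equivalent encodings:

```python
class TreePLRU4:
    """Tree-based pseudo-LRU (PLRUt) for one 4-way set: three tree bits
    steer the victim search toward the pseudo-least-recently-used way."""

    def __init__(self):
        # bits[0]: root (left pair of ways vs right pair)
        # bits[1]: within ways 0/1; bits[2]: within ways 2/3
        self.bits = [0, 0, 0]

    def touch(self, way):
        # On a hit or fill, set the bits on the path to point AWAY
        # from the accessed way.
        if way < 2:
            self.bits[0] = 1              # next victim search goes right
            self.bits[1] = 1 - (way & 1)  # point at the sibling way
        else:
            self.bits[0] = 0              # next victim search goes left
            self.bits[2] = 1 - (way & 1)

    def victim(self):
        # Follow the tree bits to find the replacement candidate.
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3
```

Only N - 1 bits per N-way set are needed (versus full ordering for true LRU), which is the source of the gate-complexity advantage the abstract investigates.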
APA, Harvard, Vancouver, ISO, and other styles
10

Kadlimatti, Pratik Kamalappa, and Dr Uma B V. "Design and Optimization of 4-way set Associative Mapped Cache Controller." International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (2023): 1948–56. http://dx.doi.org/10.22214/ijraset.2023.55430.

Full text
Abstract:
In the realm of modern computer systems, the 4-way set associative mapped cache controller emerges as a cornerstone, revolutionizing memory access efficiency. This exploration delves into its core principles, revealing its pivotal role in synchronizing rapid CPUs with slower main memory. By orchestrating seamless data exchange and employing intelligent replacement policies, this controller optimizes performance. Embarking on practical realization, a non-pipelined processor materializes using Xilinx Vivado and Verilog HDL, propelling frequent memory read/write requests for the 4-way set associative mapped cache. The quest for efficiency fuels refinements, culminating in an optimized cache controller design. Rigorously validated within the Xilinx Vivado environment, the architecture demonstrates tangible success with quantified outcomes. The design framework encompasses a 4K byte primary memory, complemented by a 1K byte 4-way set associative cache. This setting scrutinizes the optimized cache controller's efficacy. The dedicated test module, housing a suite of instructions, underscores its performance. Remarkably, the evaluation showcases 19 cache hits and 6 cache misses, revealing the potency of the optimized design in minimizing cache misses, particularly in call and jump instructions, an essential stride towards enhanced memory efficiency.
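The address handling such a controller performs can be sketched as a tag/index/offset split followed by a tag compare within the selected set. The block size and set count below are illustrative assumptions (a 1 KB, 4-way cache with 16-byte blocks has 1024 / 16 / 4 = 16 sets):

```python
def split_address(addr, block_size=16, num_sets=16):
    """Decompose a byte address into (tag, index, offset) fields."""
    offset = addr % block_size
    index = (addr // block_size) % num_sets
    tag = addr // (block_size * num_sets)
    return tag, index, offset

def is_hit(cache_sets, addr):
    """cache_sets: one list of stored tags per set (up to 4 entries each
    for a 4-way cache). Returns True on a cache hit."""
    tag, index, _ = split_address(addr)
    return tag in cache_sets[index]
```

In the hardware version the four tag comparisons within the indexed set happen in parallel; the Python lookup is only a functional model of that check.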
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Cache Hits"

1

Chandran, Varadharajan. "Robust Method to Deduce Cache and TLB Characteristics." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308256764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wooster, Roland Peter. "Optimizing Response Time, Rather than Hit Rates, of WWW Proxy Caches." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/36638.

Full text
Abstract:
This thesis investigates the possibility of improving World Wide Web (WWW) proxy cache performance. Most published research on proxy caches is concerned only with improving the cache hit rate. Improving only the hit rate, however, ignores the actual retrieval times experienced by WWW browser users. This research investigates removal algorithms that consider the time to download a file as a factor. Our experiments show that a removal algorithm that minimizes only the download time yields poor results. However, a new algorithm is investigated that does provide improved performance over common removal algorithms using three factors --- the speed at which a file is downloaded, the size of the file, and the number of references to the file (the number of hits). Experiments are conducted with a modified version of the Harvest Cache which has been made available on the Internet from the Virginia Tech Network Research Group's (VT-NRG) home page. WWW traffic from the ".edu" domain is used in all of the experiments. Five different removal algorithms are compared: least recently used, least frequently used, document size, and two new algorithms. The results indicate that the new three factor algorithm reduces the average latency experienced by users.
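A three-factor removal ranking in the spirit of the thesis can be sketched as below; the product form of the retention value is an illustrative assumption, not the exact weighting the thesis evaluates:

```python
def removal_order(docs):
    """Order cached documents most-evictable first: documents that were
    fast to download, are large, and are rarely referenced go first.

    docs: dict mapping name -> (download_time_s, size_bytes, num_hits)
    """
    def retention_value(name):
        download_time, size, hits = docs[name]
        # Keep slow-to-fetch, small, frequently referenced documents.
        return download_time * hits / size
    return sorted(docs, key=retention_value)
```

Ranking by download cost rather than hit count alone is what lets such a policy optimize user-perceived latency instead of the raw hit rate.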
APA, Harvard, Vancouver, ISO, and other styles
3

Velička, Tomáš. "Statistická data v nemocničním informačním systému." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218219.

Full text
Abstract:
The aim of this work was the evaluation of statistical indicators from data obtained from the Clinicom health information system. The data in this information system are administered by the Caché database system. Data from the database administered by the Department of Biomedical Engineering in Brno have also been used. The data mentioned in this work were mined from the database by SQL queries. The statistical indicators obtained this way needed to be presented on the internet, so a user interface for this presentation was designed and realized in the CSP programming language. The resulting statistical indicators were split into three basic groups: hospitalization, mandatory reporting, and medicament requirements. Besides the statistical indicators, some patients' data and graphs of the results are shown in the user interface. The work also includes the design of an online form for the reporting of cancers.
APA, Harvard, Vancouver, ISO, and other styles
4

Dantas, José da Paz. "Um brinde à cachaça: o patrimônio histórico-cultural e seus usos turísticos nos alambiques do Rio Grande do Norte." Universidade Federal do Rio Grande do Norte, 2016. http://repositorio.ufrn.br/handle/123456789/21107.

Full text
Abstract:
The study presents a discussion of the value of cachaça as cultural heritage and its relation to tourist activities in the State of Rio Grande do Norte. Based on a discussion rooted in historical studies of the tourism sector, especially gastronomy, cachaça is defined not only as an important instrument for the construction of identities but also as an element able to weave social, political, and economic relations; that is, a tourist product capable of opening up new destinations. The research was carried out in five alembics located in the East Potiguar and Seridó regions, which offer a significant history and production for the State of Rio Grande do Norte. To define the focus, it was necessary to refine this selection, especially when dealing with historical facts and the memories of the establishments' owners, considering each alembic's trajectory, its periods of greatest production, and its insertion into the tourist market. Through documentary surveys and observations made during field visits, the research takes a qualitative approach with a descriptive and exploratory purpose, a methodology that allows us to address questions around the articulations between heritage, identity, and tourism. It was concluded that the state of Rio Grande do Norte has many tourist routes into which the handmade cachaça produced in the state can be inserted, owing to the potential identified in the research, and that it is possible to work from other perspectives on ways of protecting this patrimony and on strategies for encouraging the activity for the development and appreciation of the local economy.
APA, Harvard, Vancouver, ISO, and other styles
5

Tomic, David. "Service Aware Traffic Distribution in Heterogeneous A2G Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-248995.

Full text
Abstract:
Airplanes have different ways to connect to the ground, including satellite air-to-ground communication (SA2GC) and direct air-to-ground communication (DA2GC). Each connection/link offers a different varying amount of transmission capacity over flight time. The traffic generated in the airplane must be forwarded/sent to ground over the available links. It is however not clear how the traffic should be forwarded so that traffic quality of service (QoS) requirements are met. The thesis at hand considers this question, and implements an algorithm handling the forwarding decision with three different forwarding schemes. Those consider traffic parameters in calculating a value assigned to each traffic flow, over a combination of priority, delay requirement and the number of times a traffic flow is dropped. The forwarding algorithm relies on proposed in-flight broadband connectivity (IFBC) network traffic and air-to-ground (A2G) link models, which aim at approximating the network environment of future IFBC networks. It is shown that QoS requirements of traffic flows in terms of packet loss and delay cannot be satisfied with capacities offered by current DA2GC and SA2GC technology. For a future scenario, with higher assumed link capacities, the QoS requirements are met to a higher extent. This is shown in lower packet loss and delay experienced by the respective traffic flows. Further, it is shown that the performance can be improved with specific forwarding schemes used by the forwarding algorithm. It is also investigated how a web cache can be used as a fallback technology. For this a required web cache hit rate is found, which should be high enough to offload the network with content served from the cache. 
Overall, the thesis aims at proposing an efficient traffic forwarding technique, and at giving insight into an alternative if this technique fails.
APA, Harvard, Vancouver, ISO, and other styles
6

Holub, Martin. "Datový standard zdravotnických informačních systémů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218778.

Full text
Abstract:
The diploma thesis "Data Standard of the Health Information Systems" deals with the structure of the data standard, including the XML language. Further, the thesis focuses on the hospital information system CLINICOM, including access to data records in its Caché database from InterSystems. A description of the national health registers is also included. One part of the thesis is the concept of a web application that can generate, at set time intervals, reports from the Caché database in the data standard of the Ministry of Health of the Czech Republic for mandatory reporting to the NHIS registers. Records found in the database can be printed for archiving. The work also deals with secure access to the HIS server and data communication between health organizations.
APA, Harvard, Vancouver, ISO, and other styles
7

Patil, Adarsh. "Heterogeneity Aware Shared DRAM Cache for Integrated Heterogeneous Architectures." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4124.

Full text
Abstract:
Integrated Heterogeneous System (IHS) processors pack throughput-oriented GPGPUs alongside latency-oriented CPUs on the same die sharing certain resources, e.g., the shared last level cache, network-on-chip (NoC), and the main memory. They also share virtual and physical address spaces and unify the memory hierarchy. The IHS architecture allows for easier programmability, data management and efficiency. However, the significant disparity in the demands for memory and other shared resources between the GPU cores and CPU cores poses significant problems in exploiting the full potential of this architecture. In this work, we propose adding a large capacity stacked DRAM, used as a shared last level cache, for the IHS processors. The reduced latency of access and large bandwidth provided by the DRAM cache can help improve performance of the CPU and GPGPU respectively, while the large capacity can help contain the working set of the IHS workloads. However, adding the DRAM cache naively leaves significant performance on the table due to the disparate demands from CPU and GPU cores for DRAM cache and memory accesses. In particular, the imbalance can significantly reduce the performance benefits that the CPU cores would have otherwise enjoyed with the introduction of the DRAM cache. This necessitates a heterogeneity-aware management of this shared resource for improved performance. To address this, in this thesis, we propose three simple techniques to enhance the performance of CPU applications while ensuring very little or no performance impact to the GPU. Specifically, we propose (i) PrIS, a prioritization scheme for scheduling CPU requests at the DRAM cache controller, (ii) ByE, a selective and temporal bypassing scheme for CPU requests at the DRAM cache and (iii) Chaining, an occupancy controlling mechanism for GPU lines in the DRAM cache through pseudo-associativity.
The resulting cache, HAShCache, is heterogeneity-aware and can adapt dynamically to address the inherent disparity of demands in an IHS architecture with simple lightweight schemes. We enhance the gem5-gpu simulator to model an IHS architecture with stacked DRAM as a cache, a coherent GPU L2 cache, CPU caches, and a shared unified physical memory. Using this setup we perform a detailed experimental evaluation of the proposed HAShCache and demonstrate an average system performance (combined performance of CPU and GPU cores) improvement of 41% over a naive DRAM cache and over 100% improvement over a baseline system with no stacked DRAM cache.
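Of the three techniques, the prioritization step (PrIS) is the most direct to sketch: serve CPU requests before GPU requests at the DRAM-cache controller, in arrival order within each class. The two-class rule below is an illustrative reading of the scheme, not the dissertation's exact scheduler:

```python
import heapq

def service_order(requests):
    """requests: list of (source, request_id) in arrival order, where
    source is "cpu" or "gpu". Returns request_ids in service order with
    CPU requests prioritized and FIFO order within each class."""
    queue = []
    for arrival, (source, request_id) in enumerate(requests):
        priority = 0 if source == "cpu" else 1   # CPU goes first
        heapq.heappush(queue, (priority, arrival, request_id))
    order = []
    while queue:
        _, _, request_id = heapq.heappop(queue)
        order.append(request_id)
    return order
```

Because latency-sensitive CPU requests jump ahead of the far more numerous GPU requests, the CPU regains the DRAM-cache benefit that naive FIFO scheduling would have diluted.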
APA, Harvard, Vancouver, ISO, and other styles
8

Hsieh, Kai-Chung, and 謝凱仲. "Adaptive Cache Replacement Policies to Increase Hit Rate of." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/y2b8ky.

Full text
Abstract:
碩士<br>國立交通大學<br>資訊科學與工程研究所<br>102<br>Load-to-use latency is one of the key factors to influence microprocessor performance. Because load instruction usually has long execution latency and take large portion of total instructions at run time. Early execute load instructions is a way to reduce the load-to-use latency. To early execute load instruction, the beginning issue is to how determine the effective address of the load instruction early. To resolve this issue, several mechanisms use a special storage to keep the values of the registers which are used recently as base or index components of effective address for the effective address calculation. The mechanisms on the calculation can be speculative or non-speculative. To have the benefits of low hardware requirement for the special storage and the non-speculative mechanisms do not need recovering method for effective address calculation, we proposed a new mechanism for early executing load instructions composed of the previous mechanisms on reducing the load-to-use latency The new mechanism uses a small cache to keep the values of the registers used recently as base component of effective address; the cache is called Base Register Cache (BRC). In original mechanism, BRC is managed by the LRU (Least Recently Used) replacement policies. In most of time, LRU performs well, but sometimes LRU will inefficiently use cache space if recencies of most references are greater than the cache size (the references are called far reuses); A solution is to retain some registers long enough in the cache to contribute cache hits, the LFU (Least Frequently Used) replacement policy was proved that performing optimally in this reference characteristic. This thesis focuses on how to make the cooperation between the LRU and the LFU policies for increasing the hit rate of BRC.  
We found an efficient technique used for memory caches, called Combined LRU and LFU Policies (CRFP), which adaptively selects between the LRU and LFU policies, and tried to apply it to the BRC. However, we observed that CRFP still achieves only the same hit rate as LRU on some benchmarks even when LFU has a higher hit rate. We therefore propose an analysis method (mis-matched selection analysis) to find the reasons why CRFP does not adapt well to the BRC, together with methods that make it adapt well. The final evolution of our proposed mechanisms (SCRFP-SC-TagAA) achieves the highest average hit-rate improvement on the baseline 4-entry fully associative BRC: 1.82% over LRU and 1.1% over CRFP. On individual benchmarks, SCRFP-SC-TagAA also shows the highest hit-rate improvement among the evolutions of our proposed mechanisms. In particular, for the benchmark group on which LRU has low hit rates, SCRFP-SC-TagAA improves the hit rate of the better of the two base policies (LRU or LFU) by up to 11.48%, and by 5.45% on average over this benchmark group. We also estimate the effectiveness of our proposed mechanisms in terms of the ratio of average hit rate to hardware overhead (cost-performance ratio); the third evolution of our proposed mechanisms (SCRFP) exceeds CRFP by 5.45% on this ratio.
APA, Harvard, Vancouver, ISO, and other styles
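The LRU/LFU cooperation that this thesis explores can be illustrated with a toy eviction policy that scores each cached entry by a weighted blend of recency and frequency. The class below is a minimal Python sketch under stated assumptions: the weight `w`, the linear scoring rule, and the unnormalised frequency term are all illustrative choices, not the BRC mechanism from the thesis.

```python
class WeightedLruLfuCache:
    """Toy cache whose eviction score blends recency (LRU) and frequency (LFU).

    The weight `w` and the linear scoring rule are illustrative assumptions,
    not the exact mechanism from the cited thesis.
    """

    def __init__(self, capacity, w=0.5):
        self.capacity = capacity
        self.w = w              # 1.0 -> pure LRU ordering, 0.0 -> pure LFU
        self.clock = 0          # logical time, advanced on every access
        self.last_used = {}     # key -> last access time (recency)
        self.freq = {}          # key -> access count (frequency)
        self.data = {}

    def _evict(self):
        # Evict the key with the lowest combined score: keys that are both
        # old and rarely used score lowest. Recency is normalised by the
        # current clock; frequency is left raw for brevity, so it dominates
        # as counts grow.
        victim = min(
            self.data,
            key=lambda k: self.w * (self.last_used[k] / max(self.clock, 1))
                          + (1 - self.w) * self.freq[k],
        )
        for table in (self.data, self.last_used, self.freq):
            del table[victim]

    def access(self, key, value=None):
        """Touch `key`; returns True on a hit, False on a miss."""
        self.clock += 1
        hit = key in self.data
        if not hit and len(self.data) >= self.capacity:
            self._evict()
        self.last_used[key] = self.clock
        self.freq[key] = self.freq.get(key, 0) + 1
        if value is not None or not hit:
            self.data[key] = value
        return hit
```

With `w=1.0` the score degenerates to pure LRU ordering and with `w=0.0` to pure LFU, so `w` plays the role that the adaptive selector in CRFP and its successors plays in hardware.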
9

TOPO, RENU. "IMPLEMENTATION OF SINGLE CORE L1 L2 CACHE WITH THE COMPARISON OF READ POLICIES USING HDL." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14611.

Full text
Abstract:
This thesis compares three different replacement policies for cache memory, Random, LRU, and Pseudo-LRU, using hit rate as the performance criterion. It involves the implementation of 2-way, 4-way, and 8-way set-associative mapping for varying cache sizes; for comparison, cache memories of 4 KB, 8 KB, 16 KB, 32 KB, and 64 KB are taken as the base on which the policies are executed. The memory controller and the required glue logic are implemented, and a test bench is written to simulate the input signals to the memory controllers. At the next level, a higher-capacity L2 cache memory is considered and the same process is repeated to estimate the performance with respect to hit rate. Verilog is used for the hardware implementation; memory controllers for the main memory, L1 cache, and L2 cache are realized in Verilog. As the associativity of the cache is doubled (2-way to 4-way and 4-way to 8-way), performance generally increases, but not by the same amount: 2-way to 4-way improves hit rate by about 3.5%, while 4-way to 8-way improves it by only about 0.2%. This implies that doubling the way size does not yield equal performance increments; to find the optimal way size, a balance must be struck between cache size and hit rate. In our case the optimal way size is 4. The performance increment also depends on the program being executed, so the optimal way size can differ between programs.
APA, Harvard, Vancouver, ISO, and other styles
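The diminishing returns from doubling associativity that this thesis measures (about 3.5% from 2-way to 4-way, about 0.2% from 4-way to 8-way) follow from how a set-associative cache maps addresses: at fixed capacity, doubling the ways halves the number of sets. A minimal Python sketch of the address-to-set mapping, assuming a 64-byte line size (the thesis does not fix one here) and power-of-two sizes:

```python
def cache_index(addr, cache_size, ways, block_size=64):
    """Map a byte address to (set index, tag) in a set-associative cache.

    Sizes are in bytes and assumed to be powers of two; block_size=64 is an
    illustrative assumption, not a parameter taken from the thesis.
    """
    num_sets = cache_size // (ways * block_size)  # fixed capacity: more ways -> fewer sets
    block_addr = addr // block_size               # strip the block-offset bits
    return block_addr % num_sets, block_addr // num_sets
```

For a 4 KB cache with 64-byte lines, 4 ways give 16 sets while 8 ways leave only 8, so more addresses compete for each set and each extra doubling of associativity buys progressively less.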
10

Jendra, Paul, and 張仁寶. "Improving DRAM Cache Hit Rate and Performance via Adaptive Granularity Block Size Management." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/5y6v3u.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Computer Science and Engineering, 2015 (academic year 104). Hit rate and access latency are the two most crucial factors determining the performance of on-chip stacked DRAM. Various fixed-granularity DRAM cache tag management designs have been proposed to improve overall stacked-DRAM performance. However, a small yet fixed data block size incurs high tag storage cost and fails to exploit spatial locality. On the other hand, coarse-grained stacked DRAM offers a higher hit rate at the cost of high bandwidth wastage due to the fetching of unused blocks. We propose an adaptive-granularity DRAM cache block management scheme to gain the benefits of both small- and coarse-granularity stacked DRAM while reducing the disadvantages of both designs. Our design not only reduces the tag storage size but also improves the hit rate by over 25%; bandwidth wastage is reduced along with the miss rate. We add a block-prefetching mechanism on top of our design to further optimize overall system performance. Our experimental results show that block prefetching achieves a 32% higher hit rate than the fixed-granularity designs. Moreover, in terms of reduced miss penalty, we achieve on average 45% and 7% performance gains over a state-of-the-art fixed-granularity DRAM cache and the ideal tags-in-SRAM design, respectively.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Cache Hits"

1

Wheatley, Catherine. Caché (Hidden). 2nd ed. Bloomsbury Publishing Plc, 2020. https://doi.org/10.5040/9781838719579.

Full text
Abstract:
Ever since its world premiere at the Cannes film festival in May 2005, audiences have been talking about Michael Haneke’s Caché. The film’s enigmatic and multi-layered narrative leaves its viewers with many more questions than answers. The plot revolves around the mystery of who is sending a series of sinister videos and drawings to Georges Laurent (Daniel Auteuil), the presenter of a literary talkshow. As Georges becomes increasingly secretive, much to the distress of his wife Anne (Juliette Binoche), a culprit fails to surface. And even at the film’s end, audiences are left struggling to make sense of what has gone before. This hasn’t stopped people trying. As Catherine Wheatley examines, a wealth of critical writing surrounds Caché, with various explanations having been offered as to what the film is ‘really’ about. In an in-depth and illuminating account, Wheatley examines the key themes at the heart of the ‘meaning’ of Caché: the film as thriller; post-colonial bourgeois guilt; political accountability and lastly, reality, the media and its audiences, tracing these strands through the film by means of close readings of individual scenes and moments. Inspired by the director’s claim that we might understand the film as a set of Russian dolls, each of which is complete in itself but together forms a whole in which layers of unseen depth are concealed, Wheatley avoids a single, unifying approach to understanding Caché. Instead, her detailed analysis of the film’s shifting perspectives opens up the multiplicity of meanings that Caché contains, in order to understand its secrets.
APA, Harvard, Vancouver, ISO, and other styles
2

Hulme, Peter. The Dinner at Gonfarone's. Liverpool University Press, 2019. http://dx.doi.org/10.3828/liverpool/9781786942005.001.0001.

Full text
Abstract:
The Dinner at Gonfarone’s is organised as a partial biography, covering five years in the life of the young Nicaraguan poet, Salomón de la Selva, but it also offers a literary geography of Hispanic New York (Nueva York) in the turbulent years around the First World War. De la Selva is of interest because he stands as the largely unacknowledged precursor of Latino writers like Junot Díaz and Julia Álvarez, writing the first book of poetry in English by an Hispanic author. In addition, through what he called his pan-American project, de la Selva brought together in New York writers from all over the American continent. He put the idea of trans-American literature into practice long before the concept was articulated. De la Selva’s range of contacts was enormous, and this book has been made possible through discovery of caches of letters that he wrote to famous writers of the day, such as Edwin Markham and Amy Lowell, and especially Edna St Vincent Millay. Alongside de la Selva’s own poetry – his book Tropical Town (1918) and a previously unknown 1916 manuscript collection – The Dinner at Gonfarone’s highlights other Hispanic writing about New York in these years by poets such as Rubén Darío, José Santos Chocano, and Juan Ramón Jiménez, all of whom were part of de la Selva’s extensive network.
APA, Harvard, Vancouver, ISO, and other styles
3

Walker, Elsie. Hearing Haneke. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190495909.001.0001.

Full text
Abstract:
Haneke’s films are sonically charged experiences of disturbance, desperation, grief, and many forms of violence. They are unsoftened by music, punctuated by accosting noises, shaped by painful silences, and defined by aggressive dialogue. Haneke is among the most celebrated of living auteurs: he is a two-time recipient of the Palme d’Or at the Cannes Film Festival (for The White Ribbon [2009] and Amour [2012]) and an Academy Award winner for Best Foreign Language Film (for Amour), among numerous other awards. The radical confrontationality of his cinema makes him a most controversial, as well as revered, subject. Hearing Haneke is the first book-length study of the sound tracks that define his living legacy as an aural auteur. Hearing Haneke provides close sonic analyses of The Seventh Continent, Funny Games, Code Unknown, The Piano Teacher, Caché, The White Ribbon, and Amour. The book includes several sustained theoretical approaches to film sound, including postcolonialism, feminism, genre studies, psychoanalysis, adaptation studies, and auteur theory. From these various theoretical angles, Hearing Haneke shows that the director consistently uses all aural elements (sound effects, dialogue, silences, and music) to inspire our humane understanding. He expresses faith in us to hear the pain of his characters’ worlds most actively, and hence our own more clearly. This has profound social and personal significance: for if we can hear everything better, this entails a new awareness of the “noise” we make in the world at large. Hearing Haneke will resonate for anyone interested in the power of art to inspire progressive change.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Cache Hits"

1

Fang, Bin, Yibo Zhong, Yizhen Sun, Jinjin Tu, Guang Jiang, and Yao Xiao. "A Cache Scheduling Method Based on Adaptive Expiration for Data Process System." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-2409-6_5.

Full text
Abstract:
Cache data plays a crucial role in the operation of the power system. In practical applications, the demand for retrieval and queries has surged due to the vast amount of terminal data collected throughout the province. Placing data in the cache effectively improves query speed [1]. However, cache space is limited, and the amount of storable data is small [2]. Without proper control over the data placed in the cache, the hit rate may decrease [3]. This paper therefore proposes a new control method that adaptively sets the expiration time of cached data, taking into account both the historical traffic distribution and the real-time query distribution. The simulation results show that the proposed method significantly improves the cache hit rate, leading to better cache utilization; in addition, it avoids frequent cache replacements and reduces I/O overhead.
APA, Harvard, Vancouver, ISO, and other styles
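The adaptive-expiration idea in the chapter above can be sketched as a cache whose TTL grows with observed demand. The rule below, `ttl = base_ttl * (1 + hits)` capped at `max_ttl`, is an illustrative stand-in for the paper's traffic-distribution model, and `AdaptiveTtlCache` and its parameters are hypothetical names.

```python
import time


class AdaptiveTtlCache:
    """Toy cache that lengthens an entry's TTL the more often it is queried.

    The rule `ttl = base_ttl * (1 + hits)` capped at `max_ttl` is an
    illustrative assumption, not the policy from the cited paper.
    """

    def __init__(self, base_ttl=1.0, max_ttl=60.0, now=time.monotonic):
        self.base_ttl, self.max_ttl, self.now = base_ttl, max_ttl, now
        self.store = {}  # key -> (value, expires_at, hit_count)

    def put(self, key, value):
        self.store[key] = (value, self.now() + self.base_ttl, 0)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at, hits = entry
        if self.now() > expires_at:      # expired: drop and report a miss
            del self.store[key]
            return None
        hits += 1                        # popular keys earn a longer TTL
        ttl = min(self.base_ttl * (1 + hits), self.max_ttl)
        self.store[key] = (value, self.now() + ttl, hits)
        return value
```

Injecting `now` makes the expiration behaviour easy to test with a fake clock instead of real sleeps.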
2

Kanazawa, Akari, and Tetsuya Shigeyasu. "A New Method for Improving Cache Hit Ratio by Utilizing Near Network Cache on NDN." In Innovative Mobile and Internet Services in Ubiquitous Computing. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-35836-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tao, Jie, Dominic Hillenbrand, and Holger Marten. "Instruction Hints for Super Efficient Data Caches." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01973-9_76.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Guilloud, Simon, Mario Bucev, Dragana Milovančević, and Viktor Kunčak. "Formula Normalizations in Verification." In Computer Aided Verification. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37709-9_19.

Full text
Abstract:
We apply and evaluate polynomial-time algorithms to compute two different normal forms of propositional formulas arising in verification. One of the normal form algorithms is presented for the first time. The algorithms compute normal forms and solve the word problem for two different subtheories of Boolean algebra: orthocomplemented bisemilattice (OCBSL) and ortholattice (OL). Equality of normal forms decides the word problem and is a sufficient (but not necessary) check for equivalence of propositional formulas. Our first contribution is a quadratic-time OL normal form algorithm, which induces a coarser equivalence than the OCBSL normal form and is thus a more precise approximation of propositional equivalence. The algorithm is efficient even when the input formula is represented as a directed acyclic graph. Our second contribution is the evaluation of OCBSL and OL normal forms as part of a verification condition cache of the Stainless verifier for Scala. The results show that both normalization algorithms substantially increase the cache hit ratio and improve the ability to prove verification conditions by simplification alone. To gain further insights, we also compare the algorithms on hardware circuit benchmarks, showing that normalization reduces circuit size and works well in the presence of sharing.
APA, Harvard, Vancouver, ISO, and other styles
5

Aoki, Miho, and Tetsuya Shigeyasu. "A Method for Improving Cache Hit Ratio by Virtual Capacity Multiplication." In Advances in Network-Based Information Systems. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-65521-5_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nakata, Yuya, and Tetsuya Shigeyasu. "A New Contents Migration Method for Reducing Network Traffic and Improving Cache Hit Rate on NDN." In Lecture Notes on Data Engineering and Communications Technologies. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-02613-4_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Goto, Kunio, and Hirona Amano. "Users’ WWW Access Statistics Measured at Proxy Servers: Case Study for Cache Hit Ratio and Response Time." In Performance and QoS of Next Generation Networking. Springer London, 2001. http://dx.doi.org/10.1007/978-1-4471-0705-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Frazier, Jessica Roberts. "Renegade Style." In On Style. punctum books, 2013. https://doi.org/10.21983/p3.0055.0.09.

Full text
Abstract:
In Act 1, Scene 3 of Philip Massinger’s The Renegado (1624), the Turkish princess Donusa browses through the goods on display at the Tunisian market shop of the Venetian Vitelli. As Jonathan Gil Harris has noted in Sick Economies, such luxury outlets, with their attendant branding, had become almost set pieces of the stage by the early seventeenth century.1 Like runway spectators at Fashion Week, Renaissance theater-goers received previews of the latest trends in everything from tobacco paraphernalia to feathers. And Vitelli’s inventory, with Venetian mirrors and glass, certainly would have proffered a bit of caché, as Harris suggests.2 But Vitelli employs an unanticipated marketing strategy, coupling his offerings with classical imagery. He hawks his goods through the promise of their similitude to the décor of Greek gods: “Here crystal glasses, such as Ganymede / Did fill with nectar to the Thunderer / When he drank to Alcides” (1.3.116–118).3 As a result, Vitelli’s “looking-glass” (1.3.108) and “crystal” take on a patina of antiquity rather than the gleam of novelty.
APA, Harvard, Vancouver, ISO, and other styles
9

Yates, David J., and Jennifer Xu. "Sensor Field Resource Management for Sensor Network Data Mining." In Intelligent Techniques for Warehousing and Mining Sensor Network Data. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-328-9.ch013.

Full text
Abstract:
This research is motivated by data mining for wireless sensor network applications. The authors consider applications where data is acquired in real-time, and thus data mining is performed on live streams of data rather than on stored databases. One challenge in supporting such applications is that sensor node power is a precious resource that needs to be managed as such. To conserve energy in the sensor field, the authors propose and evaluate several approaches to acquiring, and then caching data in a sensor field data server. The authors show that for true real-time applications, for which response time dictates data quality, policies that emulate cache hits by computing and returning approximate values for sensor data yield a simultaneous quality improvement and cost saving. This “win-win” is because when data acquisition response time is sufficiently important, the decrease in resource consumption and increase in data quality achieved by using approximate values outweighs the negative impact on data accuracy due to the approximation. In contrast, when data accuracy drives quality, a linear trade-off between resource consumption and data accuracy emerges. The authors then identify caching and lookup policies for which the sensor field query rate is bounded when servicing an arbitrary workload of user queries. This upper bound is achieved by having multiple user queries share the cost of a sensor field query. Finally, the authors discuss the challenges facing sensor network data mining applications in terms of data collection, warehousing, and mining techniques.
APA, Harvard, Vancouver, ISO, and other styles
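The chapter's "emulated cache hit" idea, returning an approximate cached value instead of issuing a costly sensor-field query, can be sketched as follows. The freshness threshold `max_age` and the class and parameter names are illustrative assumptions, not the authors' actual policies.

```python
import time


class ApproximateSensorCache:
    """Toy cache that answers stale sensor reads as 'emulated hits'.

    A query arriving within `max_age` seconds of the last acquisition returns
    the cached (approximate) value instead of querying the sensor field.
    `max_age` trades data accuracy against sensor-field energy and latency;
    the threshold rule is an illustrative assumption, not the cited policy.
    """

    def __init__(self, acquire, max_age=2.0, now=time.monotonic):
        self.acquire, self.max_age, self.now = acquire, max_age, now
        self.cache = {}  # sensor id -> (value, acquired_at)

    def read(self, sensor_id):
        """Return (value, emulated_hit) for one sensor."""
        entry = self.cache.get(sensor_id)
        if entry and self.now() - entry[1] <= self.max_age:
            return entry[0], True            # emulated hit: approximate value
        value = self.acquire(sensor_id)      # costly sensor-field query
        self.cache[sensor_id] = (value, self.now())
        return value, False
```

A larger `max_age` bounds the sensor-field query rate more tightly, mirroring the trade-off between resource consumption and data accuracy described above.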
10

Grubb, Thomas C. "Nutritional consequences of self-cached food." In Ptilochronology. Oxford University PressOxford, 2006. http://dx.doi.org/10.1093/oso/9780199295500.003.0007.

Full text
Abstract:
In the temperate and boreal zones, animals from honey bees to beavers store food during the growing season for use during the winter when food is non-renewing. Birds are among the better-studied examples of such food-hoarders. During the last several decades, research has focused on two aspects of caching. The first line of work has searched for the mechanisms behind some birds’ apparently extraordinary ability to remember the location of and retrieve hundreds or even thousands of individual caches they have previously sequestered within their home range. (Russell Balda, a pioneer of this line of research, once determined that Clark’s nutcrackers, his study animal, could remember where they had cached food considerably better than could his graduate students!) The second line of investigation has focused on the adaptive value, the fitness benefits, of caching, and it is to this second line that ptilochronology has made a contribution. The four studies detailed below all demonstrate, through increased rate of induced feather growth, that birds derive a nutritional benefit from retrieving food items they have previously cached.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Cache Hits"

1

Park, Chang Hyun, Ilias Vougioukas, Andreas Sandberg, and David Black-Schaffer. "Every walk’s a hit: making page walks single-access cache hits." In ASPLOS '22: 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems. ACM, 2022. http://dx.doi.org/10.1145/3503222.3507718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Appuswamy, Raja, David C. van Moolenbroek, and Andrew S. Tanenbaum. "Cache, cache everywhere, flushing all hits down the sink: On exclusivity in multilevel, hybrid caches." In 2013 IEEE 29th Symposium on Mass Storage Systems and Technologies (MSST). IEEE, 2013. http://dx.doi.org/10.1109/msst.2013.6558445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Holt, Alan. "Do disk drives dream of buffer cache hits?" In the conference. ACM Press, 1994. http://dx.doi.org/10.1145/199544.199614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mosquera, Fernando, Krishna Kavi, Gayatri Mehta, and Lizy K. John. "Guard Cache: Creating False Cache Hits and Misses To Mitigate Side-Channel Attacks." In 2023 Silicon Valley Cybersecurity Conference (SVCC). IEEE, 2023. http://dx.doi.org/10.1109/svcc56964.2023.10165527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mahara, Arpan, Jose Fuentes, Christian Poellabauer, and Naphtali D. Rishe. "STRCacheML: A Machine Learning-Assisted Content Caching Policy for Streaming Services." In 5th International Conference on Artificial Intelligence and Big Data. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140409.

Full text
Abstract:
Content caching is vital for enhancing web server efficiency and reducing network congestion, particularly in platforms predicting user actions. Despite many studies conducted to improve cache replacement strategies, there remains room for improvement. This paper introduces STRCacheML, a Machine Learning (ML) assisted content caching policy. STRCacheML leverages available attributes within a platform to make intelligent cache replacement decisions offline. We tested various machine learning and deep learning algorithms and adopted the one with the highest accuracy, integrating it into our cache replacement policy. This selected ML algorithm was employed to estimate the likelihood of cache objects being requested again, an essential factor in cache eviction scenarios. The IMDb dataset, comprising numerous videos with corresponding attributes, was utilized to conduct our experiment. The experimental section highlights our model’s efficacy, presenting results against the established approaches in terms of raw cache hits and cache hit rates.
APA, Harvard, Vancouver, ISO, and other styles
6

Polyakov, V. R., and M. R. Pastukhov. "INVESTIGATION OF THE EFFECTIVENESS OF CACHE COMPRESSION IN CENTRAL PROCESSORS USING THE BASE-DELTA-IMMEDIATE." In Actual problems of physical and functional electronics. Ulyanovsk State Technical University, 2024. http://dx.doi.org/10.61527/appfe-2024.70.

Full text
Abstract:
This paper presents a design in the SystemVerilog hardware description language implementing a system consisting of a 32 KB cache memory, a cache controller, main RAM, a compressor, and a decompressor. The system uses its own implementation of the Base-Delta-Immediate compression algorithm, with logic for storing, reading, and writing compressed data in the cache. Results are reported in the form of cache hits and compression ratio.
APA, Harvard, Vancouver, ISO, and other styles
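Base-Delta-Immediate compression, which the paper above implements in SystemVerilog, stores a block as one full-width base value plus a narrow signed delta per word. The Python sketch below uses a single base and one delta width for brevity; real BDI tries several base/delta size combinations plus a zero base for small immediates, and the function names here are illustrative.

```python
def bdi_compress(words, delta_bytes=1):
    """Toy Base-Delta encoder for a block of 32-bit words.

    Tries to store the block as one 4-byte base plus a small signed delta per
    word; returns None when any delta does not fit, meaning the block stays
    uncompressed. A single base and delta width stand in for the multiple
    base/delta combinations of real Base-Delta-Immediate.
    """
    base = words[0]
    lo = -(1 << (8 * delta_bytes - 1))          # e.g. -128 for 1-byte deltas
    hi = (1 << (8 * delta_bytes - 1)) - 1       # e.g. +127
    deltas = [w - base for w in words]
    if all(lo <= d <= hi for d in deltas):
        return base, deltas   # compressed size: 4 + len(words) * delta_bytes bytes
    return None               # incompressible with this base/delta size


def bdi_decompress(base, deltas):
    """Invert bdi_compress: rebuild each word as base + delta."""
    return [base + d for d in deltas]
```

When 1-byte deltas fit, a 4-word (16-byte) block shrinks to 4 + 4 = 8 bytes, which is the kind of compression-ratio versus hit-count trade-off the paper measures.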
7

Hines, Stephen, David Whalley, and Gary Tyson. "Guaranteeing Hits to Improve the Efficiency of a Small Instruction Cache." In 40th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2007). IEEE, 2007. http://dx.doi.org/10.1109/micro.2007.28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hines, Stephen, David Whalley, and Gary Tyson. "Guaranteeing Hits to Improve the Efficiency of a Small Instruction Cache." In 40th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2007). IEEE, 2007. http://dx.doi.org/10.1109/micro.2007.4408274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Xi E., and Tor M. Aamodt. "Hybrid analytical modeling of pending cache hits, data prefetching, and MSHRs." In 2008 41st IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2008. http://dx.doi.org/10.1109/micro.2008.4771779.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sermpezis, Pavlos, Thrasyvoulos Spyropoulos, Luigi Vigneri, and Theodoros Giannakas. "Femto-Caching with Soft Cache Hits: Improving Performance with Related Content Recommendation." In 2017 IEEE Global Communications Conference (GLOBECOM 2017). IEEE, 2017. http://dx.doi.org/10.1109/glocom.2017.8254035.

Full text
APA, Harvard, Vancouver, ISO, and other styles