Academic literature on the topic 'Predictive prefetch'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Predictive prefetch.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Predictive prefetch"

1

Nair, T. R. Gopalakrishnan, and P. Jayarekha. "Strategic Prefetching of VoD Programs Based on ART2 driven Request Clustering." International Journal of Information Sciences and Techniques (IJIST) 1, no. 2 (2011): 13–21. https://doi.org/10.5281/zenodo.8280938.

Abstract:
In this paper we present a novel neural architecture to classify various types of VoD request arrival patterns using unsupervised Adaptive Resonance Theory 2 (ART2) clustering. The knowledge extracted from the ART2 clusters is used to prefetch multimedia objects from disk into the proxy server’s cache, preparing the system to serve clients more efficiently before a user’s request arrives. This approach adapts to changes in user request patterns over time by storing previous information. Each cluster is represented as a prototype vector that generalizes the most frequently used video blocks accessed by all cluster members. Simulation results of the proposed clustering and prefetching algorithm show a significant increase in streaming server performance. The proposed algorithm helps the server’s agent learn user preferences and discover information about the corresponding videos, which can then be prefetched into the cache for the users who demand them.
2

Alves, Ricardo, Stefanos Kaxiras, and David Black-Schaffer. "Early Address Prediction." ACM Transactions on Architecture and Code Optimization 18, no. 3 (2021): 1–22. http://dx.doi.org/10.1145/3458883.

Abstract:
Achieving low load-to-use latency with low energy and storage overheads is critical for performance. Existing techniques either prefetch into the pipeline (via address prediction and validation) or provide data reuse in the pipeline (via register sharing or L0 caches). These techniques provide a range of tradeoffs between latency, reuse, and overhead. In this work, we present a pipeline prefetching technique that achieves state-of-the-art performance and data reuse without additional data storage, data movement, or validation overheads by adding address tags to the register file. Our addition of register file tags allows us to forward (reuse) load data from the register file with no additional data movement, keep the data alive in the register file beyond the instruction’s lifetime to increase temporal reuse, and coalesce prefetch requests to achieve spatial reuse. Further, we show that we can use the existing memory order violation detection hardware to validate prefetches and data forwards without additional overhead. Our design achieves the performance of existing pipeline prefetching while also forwarding 32% of the loads from the register file (compared to 15% in state-of-the-art register sharing), delivering a 16% reduction in L1 dynamic energy (1.6% total processor energy), with an area overhead of less than 0.5%.
3

Jain, Puneet, Justin Manweiler, Arup Acharya, and Romit Roy Choudhury. "Scalable Social Analytics for Live Viral Event Prediction." Proceedings of the International AAAI Conference on Web and Social Media 8, no. 1 (2014): 226–35. http://dx.doi.org/10.1609/icwsm.v8i1.14504.

Abstract:
Large-scale, predictive social analytics have proven effective. Over the last decade, research and industrial efforts have understood the potential value of inferences based on online behavior analysis, sentiment mining, influence analysis, epidemic spread, etc. The majority of these efforts, however, are not yet designed with realtime responsiveness as a first-order requirement. Typical systems perform a post-mortem analysis on volumes of historical data and validate their “predictions” against already-occurred events. We observe that in many applications, real-time predictions are critical and delays of hours (and even minutes) can reduce their utility. As examples: political campaigns could react very quickly to a scandal spreading on Facebook; content distribution networks (CDNs) could prefetch videos that are predicted to soon go viral; online advertisement campaigns can be corrected to enhance consumer reception. This paper proposes CrowdCast, a cloud-based framework to enable real-time analysis and prediction from streaming social data. As an instantiation of this framework, we tune CrowdCast to observe Twitter tweets, and predict which YouTube videos are most likely to “go viral” in the near future. To this end, CrowdCast first applies online machine learning to map natural language tweets to a specific YouTube video. Then, tweets that indeed refer to videos are weighted by the perceived “influence” of the sender. Finally, the video’s spread is predicted through a sociological model, derived from the emerging structure of the graph over which the video-related tweets are (still) spreading. Combining metrics of influence and live structure, CrowdCast outputs sets of candidate videos, identified as likely to become viral in the next few hours. We monitor Twitter for more than 30 days, and find that CrowdCast’s real-time predictions demonstrate encouraging correlation with actual YouTube viewership in the near future.
4

Panda, Biswabandan, and Shankar Balachandran. "Expert Prefetch Prediction: An Expert Predicting the Usefulness of Hardware Prefetchers." IEEE Computer Architecture Letters 15, no. 1 (2016): 13–16. http://dx.doi.org/10.1109/lca.2015.2428703.

5

Shyamala, K., and S. Kalaivani. "Application of Monte Carlo Search for Performance Improvement of Web Page Prediction." International Journal of Engineering & Technology 7, no. 3.4 (2018): 133. http://dx.doi.org/10.14419/ijet.v7i3.4.16761.

Abstract:
Prediction in web mining is one of the most complex tasks for reducing web user latency. The main objective of this research work is to reduce web user latency by predicting and prefetching the user’s future request page. Web user activities were analyzed and monitored from the web server log file. The present work consists of two phases. In the first phase, a directed graph of web user navigation is constructed, with repeated paths reduced. In the second phase, Monte Carlo search is applied to the constructed graph to predict the future request and prefetch the page. The work is successfully implemented, and the prediction technique gives better accuracy. The implementation paves a new way to prefetch the predicted pages at the user end to reduce user latency. The proposed Monte Carlo Prediction (MCP) algorithm is compared with the existing Hidden Markov Model algorithm and achieves better accuracy. Accuracy is measured for the predicted web pages, achieving optimal results.
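The two-phase approach in this abstract (build a navigation graph from logs, then search it to pick the next page) can be sketched as a weighted random walk over observed transitions. This is a minimal illustrative Python sketch, not the authors' MCP algorithm: the session data, function names, and sampling scheme are assumptions for illustration.

```python
import random
from collections import Counter, defaultdict


def build_graph(sessions):
    """Phase 1: weighted navigation graph from user sessions (lists of pages)."""
    graph = defaultdict(Counter)
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            graph[src][dst] += 1  # edge weight = observed transition count
    return graph


def monte_carlo_predict(graph, current, trials=1000, rng=random):
    """Phase 2: estimate the most likely next page by sampling transitions."""
    edges = graph.get(current)
    if not edges:
        return None  # page never seen as a source of a transition
    pages, weights = zip(*edges.items())
    samples = rng.choices(pages, weights=weights, k=trials)
    return Counter(samples).most_common(1)[0][0]


sessions = [["home", "news", "sports"],
            ["home", "news", "weather"],
            ["home", "shop"]]
graph = build_graph(sessions)
print(monte_carlo_predict(graph, "home"))  # "news" dominates the samples
```

The predicted page would then be prefetched into the client cache before the user clicks.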
6

Jung, Sungmin, Hyeonmyeong Lee, and Heeseung Jo. "CluMP: Clustered Markov Chain for Storage I/O Prefetch." Electronics 12, no. 15 (2023): 3293. http://dx.doi.org/10.3390/electronics12153293.

Abstract:
Due to advancements in CPU and storage technologies, the processing speed of tasks has been increasing. However, there has been a relative slowdown in the data transfer speeds between disks and memory. Consequently, the issue of I/O processing speed has become a significant concern in I/O-intensive tasks. This research paper proposes CluMP, which predicts the next block to be requested within a process using a clustered Markov chain. Compared to the simple read-ahead approach commonly used in Linux systems, CluMP can predict prefetching more accurately and requires less memory for the prediction process. CluMP demonstrated a maximum memory hit ratio improvement of 191.41% in the KVM workload compared to read-ahead, as well as a maximum improvement of 130.81% in the Linux kernel build workload. Additionally, CluMP provides the advantage of adaptability to user objectives and utilized workloads by incorporating several parameters, thereby allowing for optimal performance across various workload patterns.
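The clustering is specific to CluMP, but the underlying idea of a Markov-chain block predictor can be sketched as a transition table learned from the request stream. A minimal first-order Python sketch, assuming integer block IDs; it omits CluMP's clustering and tuning parameters and is not the paper's implementation.

```python
from collections import defaultdict


class MarkovPrefetcher:
    """First-order Markov chain over an observed stream of block requests."""

    def __init__(self):
        # transitions[a][b] counts how often block b was requested right after a
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_block = None

    def record(self, block):
        """Observe one request and update the transition counts."""
        if self.last_block is not None:
            self.transitions[self.last_block][block] += 1
        self.last_block = block

    def predict(self, block):
        """Return the most frequent successor of `block`, or None if unseen."""
        followers = self.transitions.get(block)
        if not followers:
            return None
        return max(followers, key=followers.get)


trace = [1, 2, 3, 1, 2, 4, 1, 2, 3]
p = MarkovPrefetcher()
for b in trace:
    p.record(b)
print(p.predict(1))  # block 2 followed block 1 every time -> 2
```

A prefetcher would issue a read for `predict(current_block)` while the current request is being served.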
7

Shang, Jing, Zhihui Wu, Zhiwen Xiao, Yifei Zhang, and Jibin Wang. "BERT4Cache: a bidirectional encoder representations for data prefetching in cache." PeerJ Computer Science 10 (August 29, 2024): e2258. http://dx.doi.org/10.7717/peerj-cs.2258.

Abstract:
Cache plays a crucial role in improving system response time, alleviating server pressure, and achieving load balancing in various aspects of modern information systems. The data prefetch and cache replacement algorithms are significant factors influencing caching performance. Due to the inability to learn user interests and preferences accurately, existing rule-based and data mining caching algorithms fail to capture the unique features of the user access behavior sequence, resulting in low cache hit rates. In this article, we introduce BERT4Cache, an end-to-end bidirectional Transformer model with attention for data prefetch in cache. BERT4Cache enhances cache hit rates and ultimately improves cache performance by predicting the user’s imminent future requested objects and prefetching them into the cache. In our thorough experiments, we show that BERT4Cache achieves superior results in hit rates and other metrics compared to generic reactive and advanced proactive caching strategies.
8

Yu, Genghua, and Jia Wu. "Content caching based on mobility prediction and joint user Prefetch in Mobile edge networks." Peer-to-Peer Networking and Applications 13, no. 5 (2020): 1839–52. http://dx.doi.org/10.1007/s12083-020-00954-x.

9

Hörbeck, E. A., L. Jonsson, E. Pålsson, and M. Landén. "More bipolar than bipolar disorder – a polygenic risk score analysis of postpartum psychosis." European Psychiatry 66, S1 (2023): S508–S509. http://dx.doi.org/10.1192/j.eurpsy.2023.1079.

Abstract:
Introduction: Postpartum psychosis is a rare psychiatric emergency, occurring days to weeks after 1–2 per 1,000 deliveries. Its low prevalence makes it difficult to recruit enough participants to investigate the underlying pathophysiology. It is epidemiologically linked to bipolar disorder, which one study also found it to resemble in genetic susceptibility for psychiatric disorders (Di Florio et al. Lancet Psych 2021; 8: 1045–52).
Objectives: In this study we aim to investigate polygenic liability for psychiatric disorders in two new Swedish postpartum psychosis cohorts.
Methods: Cases with postpartum psychosis, defined as a psychiatric hospitalization within 6 weeks after delivery and/or receiving a diagnosis of F53.1 (ICD-10) or 294.40 (ICD-8); parous women with severe mental illness without postpartum psychosis; and healthy parous controls were identified in two Swedish genetic studies: the Swedish bipolar collection (SWEBIC) and Predictors for ECT (PREFECT). Polygenic risk scores (PRS) were calculated from summary statistics from genome-wide studies on bipolar disorder (Mullins et al. Nat Genet 2021; 53: 817–829), schizophrenia (Trubetskoy et al. Nature 2022; 604: 502–508), and major depression (Wray et al. Nat Genet 2018; 50: 668–681). The p-value thresholds best predicting their respective phenotypes were used in logistic regression analyses with the first six principal components and genotyping platform as confounders.
Results: We identified 176 patients with postpartum psychosis and genetic information (N(SWEBIC) = 126, N(PREFECT) = 50). Compared with healthy parous women, patients with postpartum psychosis had significantly higher PRS for bipolar disorder (SWEBIC: odds ratio [OR] 2.6, 95% confidence interval [CI] 1.9–3.5; PREFECT: OR 2.4, 95% CI 1.8–3.2; Figure 1) and schizophrenia (SWEBIC: OR 1.6, 95% CI 1.2–2.2; PREFECT: OR 1.8, 95% CI 1.3–2.5). Patients with postpartum psychosis had significantly higher PRS for bipolar disorder (SWEBIC: OR 1.4, 95% CI 1.2–1.8; PREFECT: OR 1.5, 95% CI 1.1–2.0) compared with parous women with severe mental illness without postpartum psychosis. We found no associations with major depression PRS in either cohort.
Conclusions: We replicated previous findings of significantly higher PRS for bipolar disorder and schizophrenia in postpartum psychosis compared with healthy controls. In contrast to previous research, we find postpartum psychosis cases to have higher PRS for bipolar disorder than bipolar disorder cases. Our findings highlight the genetic influence in postpartum psychosis and support previous genetic and epidemiological evidence that postpartum psychosis lies on the bipolar spectrum.
Disclosure of Interest: None declared.
10

Choi, Seyun, Sukjun Hong, Hoijun Kim, Seunghyun Lee, and Soonchul Kwon. "Prefetching Method for Low-Latency Web AR in the WMN Edge Server." Applied Sciences 13, no. 1 (2022): 133. http://dx.doi.org/10.3390/app13010133.

Abstract:
Recently, low-latency services for large-capacity data have been studied given the development of edge servers and wireless mesh networks. The 3D data provided for augmented reality (AR) services have a larger capacity than general 2D data. In the conventional WebAR method, a variety of data such as HTML, JavaScript, and service data are downloaded when they are first connected. The method employed to fetch all AR data when the client connects for the first time causes initial latency. In this study, we proposed a prefetching method for low-latency AR services. Markov model-based prediction via the partial matching (PPM) algorithm was applied for the proposed method. Prefetched AR data were predicted during AR services. An experiment was conducted at the Nowon Career Center for Youth and Future in Seoul, Republic of Korea from 1 June 2022 to 31 August 2022, and a total of 350 access data points were collected over three months; the prefetching method reduced the average total latency of the client by 81.5% compared to the conventional method.

Dissertations / Theses on the topic "Predictive prefetch"

1

Lind, Tobias. "Evaluation of Instruction Prefetch Methods for Coresonic DSP Processor." Thesis, Linköpings universitet, Datorteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129128.

Abstract:
With increasing demands on mobile communication transfer rates, the circuits in mobile phones must be designed for higher performance while maintaining low power consumption for increased battery life. One possible way to improve an existing architecture is to implement instruction prefetching. By predicting which instructions will be executed ahead of time, instructions can be prefetched from memory to increase performance, and instructions that will be executed again shortly can be stored temporarily to avoid fetching them from memory multiple times. By creating a trace-driven simulator, the existing hardware can be simulated while running a realistic scenario, and different instruction prefetch methods can be implemented in the simulator to measure how they perform. It is shown that the execution time can be reduced by up to five percent and the number of memory accesses by up to 25 percent with a simple loop buffer and return stack. The execution time can be reduced even further with more complex methods such as branch target prediction and branch condition prediction.
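The return stack mentioned in this abstract is a simple structure: a call pushes the fall-through address, and a return pops it as the predicted fetch target. A minimal Python sketch with a hypothetical interface; the Coresonic simulator from the thesis is not public, so all names and behavior here are assumptions for illustration.

```python
class ReturnStackPrefetcher:
    """Bounded return-address stack for instruction-fetch prediction."""

    def __init__(self, depth=8):
        self.depth = depth
        self.stack = []

    def on_call(self, return_addr):
        """On a call instruction, remember the fall-through (return) address."""
        if len(self.stack) == self.depth:
            self.stack.pop(0)  # bounded hardware: drop the oldest entry
        self.stack.append(return_addr)

    def predict_return(self):
        """On a return instruction, pop the predicted fetch target."""
        return self.stack.pop() if self.stack else None


rs = ReturnStackPrefetcher()
rs.on_call(0x104)  # call at 0x100; the next sequential instruction is 0x104
rs.on_call(0x208)  # nested call
print(hex(rs.predict_return()))  # inner return -> 0x208
print(hex(rs.predict_return()))  # outer return -> 0x104
```

The fetch unit would start prefetching from the popped address instead of waiting for the return instruction to resolve.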
2

Wang, Jasmine Yongqi. "Using idle workstations to implement parallel prefetch prediction." Thesis, 1999. http://hdl.handle.net/2429/9820.

Abstract:
The benefits of prefetching have been largely overshadowed by the overhead required to produce high quality predictions. Although theoretical and simulation-based results for prediction algorithms such as Prediction by Partial Matching (PPM) appear promising, practical results have thus far been disappointing. This outcome can be attributed to the fact that practical implementations ultimately make compromises in order to reduce overhead. These compromises limit the level of complexity, variety, and granularity used in the policies and mechanisms the implementation supports. This thesis examines the use of idle workstations to implement prediction-based prefetching. We propose a novel framework to leverage the resources in a system area network to reduce I/O stall time by prefetching non-resident pages into a target node's memory by an idle node. This configuration allows prediction to run in parallel with a target application. We have implemented a revised version of the GMS global memory system, called GMS-3P, that provides parallel prediction-based prefetching. We discuss the different algorithms we have chosen and the policies and mechanisms used to control the quality of predictions. We have also implemented a low overhead mechanism to communicate the history fault trace between the active node and the prediction node. This thesis also explores the needs of programs which have access patterns that cannot be captured by a single configuration of PPM. The dilemma associated with conventional prediction mechanisms that attempt to accommodate this behaviour is that varying the configuration adds overhead to the prediction mechanism. By moving the prediction mechanism to an idle node, we were able to add this functionality without compromising performance on the application node. Our results show that for some applications in GMS-3P, the I/O stall time can be reduced as much as 77%, while introducing an overhead of 4-8% on the node actively running the application.
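Prediction by Partial Matching (PPM), the algorithm named in this abstract, predicts the next access from the longest context that has been seen before, falling back to shorter contexts when necessary. A minimal order-k Python sketch; the interface and trace are assumptions for illustration, not the GMS-3P implementation.

```python
from collections import defaultdict


class PPMPredictor:
    """Order-k Prediction by Partial Matching over a page-access trace."""

    def __init__(self, max_order=2):
        self.max_order = max_order
        # counts[context_tuple][next_page] = frequency of next_page after context
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []

    def record(self, page):
        """Observe one access, updating every context of order 1..max_order."""
        for k in range(1, self.max_order + 1):
            if len(self.history) >= k:
                ctx = tuple(self.history[-k:])
                self.counts[ctx][page] += 1
        self.history.append(page)

    def predict(self):
        """Fall back from the longest matching context to shorter ones."""
        for k in range(self.max_order, 0, -1):
            if len(self.history) < k:
                continue
            followers = self.counts.get(tuple(self.history[-k:]))
            if followers:
                return max(followers, key=followers.get)
        return None


trace = ["a", "b", "c", "a", "b", "c", "a", "b"]
ppm = PPMPredictor(max_order=2)
for page in trace:
    ppm.record(page)
print(ppm.predict())  # context ("a", "b") has always been followed by "c"
```

In a GMS-3P-style setup, the idle node would run this model over the forwarded fault trace and push the predicted pages into the target node's memory.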
3

Gunal, Ugur. "The effectiveness of global difference value prediction and memory bus priority schemes for speculative prefetch." 2003. http://www.lib.ncsu.edu/theses/available/etd-06302003-223805/unrestricted/etd.pdf.


Books on the topic "Predictive prefetch"

1

Stache: A novel cache architecture using predictive prefetch. National Library of Canada = Bibliothèque nationale du Canada, 1992.


Book chapters on the topic "Predictive prefetch"

1

Choi, Inchul, and Chanik Park. "Enhancing Prediction Accuracy in PCM-Based File Prefetch by Constrained Pattern Replacement Algorithm." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44864-0_22.

2

Dai, Yi, Ke Wu, Mingche Lai, Qiong Li, and Dezun Dong. "PPS: A Low-Latency and Low-Complexity Switching Architecture Based on Packet Prefetch and Arbitration Prediction." In Algorithms and Architectures for Parallel Processing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38991-8_1.

3

Gupta, Ajay Kumar, and Udai Shanker. "An Efficient Markov Chain Model Development based Prefetching in Location-Based Services." In Privacy and Security Challenges in Location Aware Computing. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7756-1.ch005.

Abstract:
A significant issue with current location-based services applications is how to securely store information for users on the network so that data items can be accessed quickly. One way to do this is to store data items that have a high likelihood of subsequent request. This strategy is known as proactive caching or prefetching: a technique in which selected information is cached before it is actually needed. In comparison, past proactive caching strategies showed high data overhead in terms of computing costs. Therefore, using a Markov chain model, the aim of this work is to address the above problems with an efficient strategy for predicting a user’s future position movement. To model the proposed system and evaluate the feasibility of accessing information on the network for location-based applications, this chapter uses a client-server queuing model. The observational findings indicate substantial improvements in caching efficiency over previous caching policies that did not use a prefetch module.
4

Guo, Yong Zhen, Kotagiri Ramamohanarao, and Laurence A. F. Park. "Web Page Prediction Based on Conditional Random Fields." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2008. https://doi.org/10.3233/978-1-58603-891-5-251.

Abstract:
Web page prefetching is used to reduce the access latency of the Internet. However, if most prefetched Web pages are not visited by the users in their subsequent accesses, the limited network bandwidth and server resources will not be used efficiently and may worsen the access delay problem. Therefore, it is critical that we have an accurate prediction method during prefetching. Conditional Random Fields (CRFs), which are popular sequential learning models, have already been successfully used for many Natural Language Processing (NLP) tasks such as POS tagging, named entity recognition (NER) and segmentation. In this paper, we propose the use of CRFs in the field of Web page prediction. We treat the accessing sessions of previous Web users as observation sequences and label each element of these observation sequences to get the corresponding label sequences, then based on these observation and label sequences we use CRFs to train a prediction model and predict the probable subsequent Web pages for the current users. Our experimental results show that CRFs can produce higher Web page prediction accuracy effectively when compared with other popular techniques like plain Markov Chains and Hidden Markov Models (HMMs).

Conference papers on the topic "Predictive prefetch"

1

Katseff, H., and B. Robinson. "Predictive prefetch in the Nemesis multimedia information service." In the second ACM international conference. ACM Press, 1994. http://dx.doi.org/10.1145/192593.192656.

2

Nicolas, Louis-Marie, Salim Mimouni, Philippe Couvée, and Jalil Boukhobza. "GrIOt: Graph-based Modeling of HPC Application I/O Call Stacks for Predictive Prefetch." In SC-W 2023: Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis. ACM, 2023. http://dx.doi.org/10.1145/3624062.3624189.

3

Koizumi, Toru, Tomoki Nakamura, Yuya Degawa, Hidetsugu Irie, Shuichi Sakai, and Ryota Shioya. "T-SKID: Predicting When to Prefetch Separately from Address Prediction." In 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2022. http://dx.doi.org/10.23919/date54114.2022.9774765.

4

Arifin, Hasan Nur, Leanna Vidya Yovita, and Nana Rachmana Syambas. "Proactive Caching of Mobility Prediction Prefetch and Non-Prefetch in ICN." In 2019 International Conference on Electrical Engineering and Informatics (ICEEI). IEEE, 2019. http://dx.doi.org/10.1109/iceei47359.2019.8988885.

5

Borkowski, Michael, Olena Skarlat, Stefan Schulte, and Schahram Dustdar. "Prediction-Based Prefetch Scheduling in Mobile Service Applications." In 2016 IEEE International Conference on Mobile Services (MS). IEEE, 2016. http://dx.doi.org/10.1109/mobserv.2016.17.

6

Liao, Shih-wei, Tzu-Han Hung, Donald Nguyen, Hucheng Zhou, Chinyen Chou, and Chiaheng Tu. "Prefetch optimizations on large-scale applications via parameter value prediction." In the 23rd international conference. ACM Press, 2009. http://dx.doi.org/10.1145/1542275.1542359.

7

Parate, Abhinav, Matthias Böhmer, David Chu, Deepak Ganesan, and Benjamin M. Marlin. "Practical prediction and prefetch for faster access to applications on mobile phones." In UbiComp '13: The 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 2013. http://dx.doi.org/10.1145/2493432.2493490.

8

Batcher, Ken, and Robert Walker. "Cluster miss prediction with prefetch on miss for embedded CPU instruction caches." In the 2004 international conference. ACM Press, 2004. http://dx.doi.org/10.1145/1023833.1023839.

9

Jamet, Alexandre Valentin, Georgios Vavouliotis, Daniel A. Jiménez, Lluc Alvarez, and Marc Casas. "A Two Level Neural Approach Combining Off-Chip Prediction with Adaptive Prefetch Filtering." In 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2024. http://dx.doi.org/10.1109/hpca57654.2024.00046.

10

Saichandana, S., and Kavitha Sooda. "An Analysis for the Prediction of Prefetched Content on Social Media." In 2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT). IEEE, 2021. http://dx.doi.org/10.1109/rteict52294.2021.9573858.
