A selection of scientific literature on the topic "Prefetch techniques"

Format your source citation in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Prefetch techniques".

Next to each work in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Prefetch techniques"

1

Verma, Santhosh, and David M. Koppelman. "The Interaction and Relative Effectiveness of Hardware and Software Data Prefetch." Journal of Circuits, Systems and Computers 21, no. 02 (2012): 1240002. http://dx.doi.org/10.1142/s0218126612400026.

Abstract:
A major performance limiter in modern processors is the long latencies caused by data cache misses. Both compiler- and hardware-based prefetching schemes help hide these latencies and so improve performance. Compiler techniques infer memory access patterns through code analysis, and insert appropriate prefetch instructions. Hardware prefetching techniques work independently from the compiler by monitoring an access stream, detecting patterns in this stream and issuing prefetches based on these patterns. This paper looks at the interplay between compiler and hardware architecture-based prefetching techniques. Does either technique make the other one unnecessary? First, compilers' ability to achieve good results without extreme expertise is evaluated by preparing binaries with no prefetch, one-flag prefetch (no tuning), and expertly tuned prefetch. From runs of SPECcpu2006 binaries, we find that expertise avoids minor slowdown in a few benchmarks and provides substantial speedup in others. We compare software schemes to hardware prefetching schemes and our simulations show software alone substantially outperforms hardware alone on about half of a selection of benchmarks. While hardware matches or exceeds software in a few cases, software is better on average. Analysis reveals that in many cases hardware is not prefetching access patterns that it is capable of recognizing, due to some irregularities in the observed miss sequence. Hardware outperforms software on address sequences that the compiler would not guess. In general, while software is better at prefetching individual loads, hardware partly compensates for this by identifying more loads to prefetch. Using the two schemes together provides further benefits, but less than the sum of the contributions of each alone.
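As a concrete illustration of the hardware side of this comparison, the toy simulation below models a direct-mapped cache with and without a simple next-line prefetcher on a sequential access stream. The cache parameters and trace are invented for illustration and are not taken from the paper.

```python
# Toy direct-mapped cache model showing how a simple hardware next-line
# prefetcher can hide demand misses on a sequential stream. All parameters
# here (block size, set count, trace) are made up for illustration.

BLOCK = 64          # bytes per cache block
SETS = 256          # direct-mapped: one block per set

def run(trace, prefetch=False):
    tags = [None] * SETS
    misses = 0

    def touch(addr, demand=True):
        nonlocal misses
        blk = addr // BLOCK
        idx, tag = blk % SETS, blk // SETS
        if tags[idx] != tag:
            if demand:              # prefetch fills are not counted as misses
                misses += 1
            tags[idx] = tag

    for addr in trace:
        touch(addr)
        if prefetch:                # next-line prefetch on every access
            touch(addr + BLOCK, demand=False)
    return misses

trace = list(range(0, 64 * 1024, 8))   # sequential 8-byte loads over 64 KiB
print(run(trace, prefetch=False))      # every first touch of a block misses
print(run(trace, prefetch=True))       # the prefetcher hides almost all of them
```

On this perfectly sequential trace the prefetcher removes all but the very first miss; irregular miss sequences, as the abstract notes, are exactly where such hardware schemes struggle.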
2

Srivastava, Swapnita, and P. K. Singh. "ADDP: The Data Prefetching Protocol for Monitoring Capacity Misses." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 14 (April 11, 2025): e31782. https://doi.org/10.14201/adcaij.31782.

Abstract:
Prefetching is essential for minimizing cache misses and improving processor performance. Many prefetchers have been proposed, including simple but highly effective stream-based prefetchers and prefetchers that predict complex access patterns using structures such as history buffers and bit vectors. However, many cache misses still occur in many applications. After analyzing various instruction- and data-prefetching techniques, we extracted several key features that impact system performance. Data prefetching is an essential technique used in all commercial processors; data prefetchers aim to hide long data access latencies. In this paper, we present the design of Adaptive Delta-based Data Prefetching (ADDP), which employs four tables organized hierarchically to address the diversity of access patterns. First, the Entry Table is a queue that tracks recent cache fills. Second, the Predict Table has trigger program counters (PCs) as tags. Third, the Address Difference Table (ADT) has target PCs as tags. Lastly, the Prefetch Table is divided into two parts: the Prefetch Filter, which filters out unnecessary prefetch accesses, and the actual Prefetch Table, which tracks additional information for each prefetch. ADDP has been implemented in a multi-level cache prefetching system under the 3rd Data Prefetching Championship (DPC-3) framework. ADDP is an effective solution for data-intensive applications, showing notable gains in cache hit rates and latency reduction. The simulation results show that ADDP outperforms the top three data prefetchers, MLOP, SPP and BINGO, by 5.312%, 13.213% and 10.549%, respectively.
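The table-driven idea the abstract describes can be sketched, in heavily simplified form, as a per-PC delta predictor. The structure below is a hypothetical illustration of delta-based prefetching in general, not ADDP's actual four-table design.

```python
# Minimal sketch of delta-based prefetching: per load PC, remember the last
# address and last delta; when the same delta repeats, predict the next
# address and queue it for prefetch. Table layout and sizes are invented.

class DeltaPrefetcher:
    def __init__(self):
        self.table = {}        # pc -> (last_addr, last_delta)

    def on_access(self, pc, addr):
        prefetches = []
        last = self.table.get(pc)
        if last is not None:
            last_addr, last_delta = last
            delta = addr - last_addr
            if delta != 0 and delta == last_delta:   # delta confirmed twice
                prefetches.append(addr + delta)
            self.table[pc] = (addr, delta)
        else:
            self.table[pc] = (addr, 0)
        return prefetches

pf = DeltaPrefetcher()
issued = []
for addr in [100, 140, 180, 220]:    # one load PC with a constant delta of 40
    issued += pf.on_access(pc=0x400, addr=addr)
print(issued)                        # addresses queued for prefetch
```

A real design would add a filter (like ADDP's Prefetch Filter) in front of the issue queue to drop prefetches for blocks already in the cache or in flight.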
3

Deb, Dipika, and John Jose. "ZPP: A Dynamic Technique to Eliminate Cache Pollution in NoC based MPSoCs." ACM Transactions on Embedded Computing Systems 22, no. 5s (2023): 1–25. http://dx.doi.org/10.1145/3609113.

Abstract:
Data prefetching efficiently reduces memory access latency in NUCA architectures, where the Last Level Cache (LLC) is shared and distributed across multiple cores. But cache pollution generated by the prefetcher reduces its efficiency by causing contention for shared resources such as the LLC and the underlying network. This paper proposes the Zero Pollution Prefetcher (ZPP), which eliminates cache pollution in NUCA architectures. For this purpose, ZPP uses an L1 prefetcher and places prefetched blocks in the LLC data locations where modified blocks are stored. Since modified blocks in the LLC are stale and requests for such blocks are served from the exclusively owned private cache, maintaining such stale data unnecessarily consumes power. The benefits of ZPP are: (a) it eliminates cache pollution in the L1 and LLC by storing prefetched blocks in LLC locations that hold stale blocks; (b) it addresses insufficient cache space by placing prefetched blocks in the LLC, which is larger than the L1 cache, allowing more cache blocks to be prefetched and thereby increasing prefetch aggressiveness; (c) increased prefetch aggressiveness increases coverage; and (d) it maintains a lookup latency for prefetched blocks equivalent to that of the L1 cache. Experiments show that ZPP increases weighted speedup by 2.19x compared to a system with no prefetching, while prefetch coverage and prefetch accuracy increase by 50% and 12%, respectively, compared to the baseline.
4

Alves, Ricardo, Stefanos Kaxiras, and David Black-Schaffer. "Early Address Prediction." ACM Transactions on Architecture and Code Optimization 18, no. 3 (2021): 1–22. http://dx.doi.org/10.1145/3458883.

Abstract:
Achieving low load-to-use latency with low energy and storage overheads is critical for performance. Existing techniques either prefetch into the pipeline (via address prediction and validation) or provide data reuse in the pipeline (via register sharing or L0 caches). These techniques provide a range of tradeoffs between latency, reuse, and overhead. In this work, we present a pipeline prefetching technique that achieves state-of-the-art performance and data reuse without additional data storage, data movement, or validation overheads by adding address tags to the register file. Our addition of register file tags allows us to forward (reuse) load data from the register file with no additional data movement, keep the data alive in the register file beyond the instruction’s lifetime to increase temporal reuse, and coalesce prefetch requests to achieve spatial reuse. Further, we show that we can use the existing memory order violation detection hardware to validate prefetches and data forwards without additional overhead. Our design achieves the performance of existing pipeline prefetching while also forwarding 32% of the loads from the register file (compared to 15% in state-of-the-art register sharing), delivering a 16% reduction in L1 dynamic energy (1.6% total processor energy), with an area overhead of less than 0.5%.
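The core mechanism of forwarding load data from the register file can be sketched as follows. The class, sizes, and return values are invented for illustration and do not reflect the paper's hardware design.

```python
# Hypothetical sketch of address-tagging the register file: each register
# remembers which memory address its value came from, so a later load to the
# same address can be forwarded from a register instead of accessing the cache.

class TaggedRegisterFile:
    def __init__(self, nregs=32):
        self.value = [0] * nregs
        self.addr_tag = [None] * nregs   # address each register's value came from

    def load(self, dest_reg, addr, memory):
        # Check the tags first: a hit means the data is already in a register.
        for r, tag in enumerate(self.addr_tag):
            if tag == addr:
                self.value[dest_reg] = self.value[r]
                self.addr_tag[dest_reg] = addr
                return "forwarded"
        # Tag miss: perform the normal cache/memory access and record the tag.
        self.value[dest_reg] = memory[addr]
        self.addr_tag[dest_reg] = addr
        return "cache access"

mem = {0x1000: 42}
rf = TaggedRegisterFile()
print(rf.load(1, 0x1000, mem))   # first load goes to the cache/memory
print(rf.load(2, 0x1000, mem))   # repeat load to the same address is forwarded
```

In hardware the tag check is an associative lookup, and (as the paper notes) stores and memory-order violations must invalidate stale tags; that machinery is omitted here.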
5

Roy, Bishwa Ranjan, Purnendu Das, and Nurulla Mansur Barbhuiya. "PP-Bridge: Establishing a Bridge between the Prefetching and Cache Partitioning." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 897–906. http://dx.doi.org/10.17762/ijritcc.v11i9.8982.

Abstract:
Modern computer processors are equipped with multiple cores, each boasting its own dedicated cache memory, while collectively sharing a generously sized Last Level Cache (LLC). To ensure equitable utilization of the LLC space and bolster system security, partitioning techniques have been introduced to allocate the shared LLC space among the applications running on different cores. This partition dynamically adapts to the requirements of these applications. Prefetching plays a vital role in enhancing cache performance by proactively loading data into the cache before it is explicitly requested by a core. Each core employs prefetch engines to decide which data blocks to fetch preemptively. However, a haphazard prefetcher may bring in more data blocks than necessary, leading to cache pollution and a subsequent degradation in system performance. To maximize the benefits of prefetching, it is essential to keep cache pollution to a minimum. Intriguingly, our research has uncovered that when existing prefetching techniques are combined with partitioning methods, they tend to exacerbate cache pollution within the LLC, resulting in a noticeable decline in system performance. In this paper, we present a novel approach aimed at mitigating cache pollution when combining prefetching with partitioning techniques.
6

Hariharan, I., and M. Kannan. "Efficient Use of On-Chip Memories and Scheduling Techniques to Eliminate the Reconfiguration Overheads in Reconfigurable Systems." Journal of Circuits, Systems and Computers 28, no. 14 (2019): 1950246. http://dx.doi.org/10.1142/s0218126619502463.

Abstract:
Modern embedded systems are packed with dedicated Field Programmable Gate Arrays (FPGAs) to accelerate the overall system performance. However, the FPGAs are susceptible to reconfiguration overheads. The reconfiguration overheads are mainly because of the configuration data being fetched from the off-chip memory at run-time and also due to the improper management of tasks during execution. To reduce these overheads, our proposed methodology mainly focuses on the prefetch heuristic, reuse technique, and the available memory hierarchy to provide an efficient mapping of tasks over the available memories. Our paper includes a new replacement policy which reduces the overall time and energy reconfiguration overheads for static systems in their subsequent iterations. It is evident from the result that most of the reconfiguration overheads are eliminated when the applications are managed and executed based on our methodology.
7

Liang, Ye. "Big Data Storage Method in Wireless Communication Environment." Advanced Materials Research 756-759 (September 2013): 899–904. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.899.

Abstract:
The big data phenomenon refers to the practice of collecting and processing very large data sets and the associated systems and algorithms used to analyze them. Big data services are very attractive in wireless communication environments, especially for spatial applications, which are typical big data applications. Because of the complexity of ingesting, storing and analyzing geographical information data, this paper reflects on a few of the technical problems presented by the exploration of big data, and puts forward an effective storage method for wireless communication environments based on measuring movement regularity, built on three key techniques: a partition technique, an index technique and a prefetch technique. Experimental results show that a big data storage method using these new techniques outperforms other storage methods in managing large volumes of big data in a wireless communication environment.
8

Natarajan, Ragavendra, Vineeth Mekkat, Wei-Chung Hsu, and Antonia Zhai. "Effectiveness of Compiler-Directed Prefetching on Data Mining Benchmarks." Journal of Circuits, Systems and Computers 21, no. 02 (2012): 1240006. http://dx.doi.org/10.1142/s0218126612400063.

Abstract:
For today's increasingly power-constrained multicore systems, integrating simpler and more energy-efficient in-order cores becomes attractive. However, since in-order processors lack complex hardware support for tolerating long-latency memory accesses, developing compiler technologies to hide such latencies becomes critical. Compiler-directed prefetching has been demonstrated effective on some applications. On the application side, a large class of data centric applications has emerged to explore the underlying properties of the explosively growing data. These applications, in contrast to traditional benchmarks, are characterized by substantial thread-level parallelism, complex and unpredictable control flow, as well as intensive and irregular memory access patterns. These applications are expected to be the dominating workloads on future microprocessors. Thus, in this paper, we investigated the effectiveness of compiler-directed prefetching on data mining applications in in-order multicore systems. Our study reveals that although properly inserted prefetch instructions can often effectively reduce memory access latencies for data mining applications, the compiler is not always able to exploit this potential. Compiler-directed prefetching can become inefficient in the presence of complex control flow and memory access patterns, as well as architecture-dependent behaviors. The integration of multithreaded execution onto a single die makes it even more difficult for the compiler to insert prefetch instructions, since optimizations that are effective for single-threaded execution may or may not be effective in multithreaded execution. Thus, compiler-directed prefetching must be judiciously deployed to avoid creating performance bottlenecks that otherwise do not exist. Our experiences suggest that dynamic performance tuning techniques that adjust to the behaviors of a program can potentially facilitate the deployment of aggressive optimizations in data mining applications.
9

Veeragangadhara Swamy, T. M., and G. T. Raju. "A Novel Prefetching Technique through Frequent Sequential Patterns from Web Usage Data." COMPUSOFT: An International Journal of Advanced Computer Technology 04, no. 06 (2015): 1826–36. https://doi.org/10.5281/zenodo.14785813.

Abstract:
Frequent sequential patterns (FSPs) from web usage data (WUD) are very important for analyzing and understanding users' behavior in order to improve the quality of services offered by the World Wide Web (WWW). Web prefetching is one of the techniques for reducing web latency, thereby improving the web retrieval process. This technique makes use of prefetching rules derived from FSPs. In this paper, we explore different FSP mining algorithms, such as SPM, FP-Growth, and SPADE, for extracting FSPs from the WUD of an academic website over periods ranging from weekly to quarterly. The performance of all of these FSP algorithms has been analyzed against the number of FSPs they generate for a given minimum support. Experimental results show that the SPADE FSP mining algorithm performs better than the SPM and FP-Growth algorithms. Based on the FSPs, we propose a novel prefetching technique that generates prefetching rules from the FSPs and prefetches web pages so as to reduce users' perceived latency.
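The step from mined patterns to prefetching rules can be sketched as follows. The pattern format, support values, and page names are invented for illustration; in practice an FSP miner such as SPADE would supply the input.

```python
# Hypothetical sketch: turn frequent sequential patterns into prefetch rules
# of the form "after seeing prefix P, prefetch page X", keeping the
# highest-support continuation for each prefix.

def build_rules(patterns):
    """patterns: list of (page_sequence, support). Returns prefix -> next page."""
    best = {}
    for seq, support in patterns:
        for i in range(1, len(seq)):
            prefix = tuple(seq[:i])
            if prefix not in best or support > best[prefix][1]:
                best[prefix] = (seq[i], support)
    return {prefix: nxt for prefix, (nxt, _) in best.items()}

# Invented mined patterns with their support counts.
patterns = [
    (["index", "courses", "cs101"], 30),
    (["index", "staff"], 12),
]
rules = build_rules(patterns)
print(rules[("index",)])             # most likely page to prefetch after "index"
print(rules[("index", "courses")])
```

A server would consult `rules` on each request, using the user's recent page sequence as the prefix, and push (or pre-render) the predicted next page.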
10

Li, Haoyu, Qizhi Chen, Yixin Zhang, Tong Yang, and Bin Cui. "Stingy sketch." Proceedings of the VLDB Endowment 15, no. 7 (2022): 1426–38. http://dx.doi.org/10.14778/3523210.3523220.

Abstract:
Recording the frequency of items in highly skewed data streams is a fundamental and hot problem in recent years. The literature demonstrates that sketch is the most promising solution. The typical metrics to measure a sketch are accuracy and speed, but existing sketches make only trade-offs between the two dimensions. Our proposed solution is a new sketch framework called Stingy sketch with two key techniques: Bit-pinching Counter Tree ( BCTree ) and Prophet Queue ( PQueue ) which optimizes both the accuracy and speed. The key idea of BCTree is to split a large fixed-size counter into many small nodes of a tree structure, and to use a precise encoding to perform carry-in operations with low processing overhead. The key idea of PQueue is to use pipelined prefetch technique to make most memory accesses happen in L2 cache without losing precision. Importantly, the two techniques are cooperative so that Stingy sketch can improve accuracy and speed simultaneously. Extensive experimental results show that Stingy sketch is up to 50% more accurate than the SOTA of accuracy-oriented sketches and is up to 33% faster than the SOTA of speed-oriented sketches.
More sources

Dissertations on the topic "Prefetch techniques"

1

Chang, Nelson Yen-Chung, and 張彥中. "Cache Prefetch Techniques and Bus Bridge Design in SOC." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/01054116522540066658.

Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering. Cache prefetching has long been known to reduce the cache miss rate and to hide the memory access latencies seen by the processor in a processor-based system. This provides a chance to implement a smaller cache with a prefetch mechanism that achieves the same miss rate as a larger cache without prefetching, thus reducing cache hardware cost. Though reducing the miss rate improves cache performance, the extra prefetch memory requests increase overall system bus traffic. This increased traffic sometimes diminishes overall system performance despite the reduced miss rate. In an embedded SOC system, more devices access the shared system bus, so heavy bus traffic limits the benefits of applying cache prefetching techniques to an embedded system. Since hardware prefetching takes advantage of run-time information and can take the system bus status into consideration, it is more suitable for embedded systems with multiple master devices. In this thesis, we investigate the characteristics of several hardware cache prefetching techniques. We then propose a new cache prefetching scheme named Reference Time Stride Prefetch (RTSP), which incorporates access timing information, together with a system bus bridge design with access reordering for the processor to solve the bus congestion problem. The effect of each relevant parameter, and how prefetching affects an embedded system, is revealed by cycle-by-cycle trace-driven simulations of an embedded system model with an ARM7TDMI core and an AHB system bus. The simulation results show that RTSP can reduce average data reference time by 8.8% and data miss rate by more than 90% compared with an unprefetched system.
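The stride-detection idea behind schemes like RTSP can be sketched with the classic confirmation state machine below. The states and thresholds are illustrative, and RTSP's reference-time information is not modeled here.

```python
# Hypothetical sketch of a classic stride-prefetch table: one entry per load
# PC with a small state machine (init -> transient -> steady). A prefetch is
# issued only once the stride has been confirmed twice in a row.

class StrideEntry:
    def __init__(self, addr):
        self.last_addr, self.stride, self.state = addr, 0, "init"

class StridePrefetcher:
    def __init__(self):
        self.table = {}   # load PC -> StrideEntry

    def on_access(self, pc, addr):
        e = self.table.get(pc)
        if e is None:
            self.table[pc] = StrideEntry(addr)
            return None
        new_stride = addr - e.last_addr
        if new_stride == e.stride and new_stride != 0:
            e.state = "steady"          # same nonzero stride seen again
        elif e.state == "steady":
            e.state = "init"            # pattern broken; re-learn
        else:
            e.state = "transient"
        e.last_addr, e.stride = addr, new_stride
        return addr + e.stride if e.state == "steady" else None

pf = StridePrefetcher()
pred = None
for a in [0, 16, 32, 48]:               # one load PC, constant stride of 16
    pred = pf.on_access(pc=0x20, addr=a)
print(pred)    # once the stride is confirmed, the next address is predicted
```

A bus-aware variant like the thesis describes would additionally consult bus-utilization or timing state before issuing the predicted address.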
2

Jen, Hsung, and 任軒. "Reconfiguration Overhead Reduction Using Prefetch and Merge Techniques in Run-Time Reconfigurable System." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/21085391635759437079.

3

Huang, Hsuan-Woei, and 黃宣偉. "A Study on Prefetch and Compiler Assistant Techniques for Clustering Multiprocessor System Design and Implementation of Its Simulation and Evaluation Environment." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/20226792088898404435.

Abstract:
Master's thesis, National Chiao Tung University, Department of Computer Science and Information Engineering. Recently, shared-memory multiprocessor systems have become one of the design trends in computer system architectures. As computational demands increase, multiprocessor systems with more processors become an unavoidable trend, and clustering multiprocessor systems play an important role thanks to their high scalability and data locality. We have developed a simulation and evaluation environment for clustering multiprocessor systems, which aims at investigating the key design issues of the memory subsystem. Our environment is a program-driven simulator, consisting of a memory reference generator supported by MINT and a memory subsystem simulator that we designed. The memory subsystem simulator supports several simulation modules, including a two-level cache, a local bus, an inter-cluster cache, cache coherence protocols, and an interconnection network. With the aid of this environment, we studied major design issues of clustering multiprocessor systems, including data prefetching and parallel compiler assistance techniques. Based on extensive evaluation results, we found that clustering can have good scalability compared with non-clustering architectures. The size of the cluster node and related issues are also investigated. Moreover, data prefetching techniques boost the performance of clustering multiprocessor systems, especially with our clus-prefetch and combined-prefetch designs. We also provide assistance techniques for parallel compilers that improve the performance of clustering multiprocessor systems to some degree.
4

Cai, Jie. "Region-based techniques for modeling and enhancing cluster OpenMP performance." Phd thesis, 2011. http://hdl.handle.net/1885/8865.

Abstract:
Cluster OpenMP enables the use of the OpenMP shared memory programming model on clusters. Intel has released a cluster OpenMP implementation called Intel Cluster OpenMP (CLOMP). While this offers better programmability than message passing alternatives such as the Message Passing Interface (MPI), such convenience comes with overheads resulting from having to maintain the consistency of the underlying shared memory abstractions. CLOMP is no exception. This thesis introduces models for understanding these overheads of cluster OpenMP implementations like CLOMP and proposes techniques for enhancing their performance. Cluster OpenMP systems are usually implemented using page-based software distributed shared memory systems. A key issue for such systems is maintaining the consistency of the shared memory space. This forms a major source of overhead, driven by detecting and servicing page faults. To understand these systems, we evaluate their performance with different OpenMP applications, and we also develop a benchmark, called MCBENCH, to characterize the memory consistency costs. Using MCBENCH, we discover that this overhead is proportional to the number of writers to the same shared page and the number of shared pages. Furthermore, we divide an OpenMP program into parallel and serial regions. Based on these regions, we develop two region-based models to rationalize the numbers and types of page faults and their associated performance costs. The models highlight the fact that the major overhead is servicing the type of page fault that requires data to be transferred across the network. With this understanding, we have developed three region-based prefetch (ReP) techniques based on the execution history of each region. The first ReP technique (TReP) considers temporal paging behaviour between consecutive executions of the same region. The second technique (HReP) considers both the temporal paging behaviour between consecutive region executions and the spatial paging behaviour within a region execution. The last technique (DReP) utilizes a novel stride-augmented run length encoding (sRLE) method to address both the temporal and spatial paging behaviour between consecutive region executions. The ReP techniques effectively reduce the number of page faults and aggregate data into larger transfers, which leverages the network bandwidth provided by interconnects. All three ReP techniques are implemented in the runtime libraries of CLOMP to enhance its performance. Both the original and the enhanced CLOMP are evaluated using the NAS Parallel Benchmark OpenMP (NPB-OMP) suite and two LINPACK OpenMP benchmarks on two clusters connected with Ethernet and InfiniBand interconnects. The performance data is quantitatively analyzed and modeled. MCBENCH is used to evaluate the impact of the ReP techniques on memory consistency cost. The evaluation results demonstrate that, on average, CLOMP spends 75% and 55% of the overall elapsed time of the NPB-OMP benchmarks on these overheads on Gigabit Ethernet and double data rate InfiniBand networks respectively. These ratios are effectively reduced by approximately 60% and 40% after implementing the ReP techniques in the CLOMP runtime. For the LINPACK benchmarks, with the assistance of sRLE, DReP significantly outperforms the other ReP techniques, effectively reducing page fault handling costs by 50% and 58% on the Ethernet and InfiniBand networks respectively.
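The temporal intuition behind a technique like TReP can be sketched as follows: remember which pages faulted during a region's previous execution and prefetch that set, as one aggregated transfer, when the region is next entered. The structures below are invented for illustration and are not CLOMP's implementation.

```python
# Hypothetical sketch of region-based temporal prefetching: per parallel
# region, record the shared pages that faulted last time, and return that set
# (to be fetched in one bulk transfer) at the region's next entry.

class RegionPrefetcher:
    def __init__(self):
        self.history = {}      # region id -> set of pages faulted previously

    def enter_region(self, region_id):
        # Pages to prefetch up front; a real runtime would batch these into
        # a single large network transfer to amortize per-message latency.
        return sorted(self.history.get(region_id, set()))

    def record_fault(self, region_id, page):
        self.history.setdefault(region_id, set()).add(page)

rp = RegionPrefetcher()
print(rp.enter_region("omp_region_1"))           # first execution: no history
for page in [7, 8, 9, 42]:                       # faults observed this time
    rp.record_fault("omp_region_1", page)
print(rp.enter_region("omp_region_1"))           # next execution: prefetch set
```

HReP- and DReP-style refinements would additionally extrapolate spatial strides within and across executions rather than replaying the raw fault set.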

Book chapters on the topic "Prefetch techniques"

1

Hong, Maria, Euisun Kang, Sungmin Um, Dongho Kim, and Younghwan Lim. "A Transcode and Prefetch Technique of Multimedia Presentations for Mobile Terminals." In Computational Science and Its Applications – ICCSA 2004. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24707-4_8.

2

Gupta, Ajay Kumar, and Udai Shanker. "An Efficient Markov Chain Model Development based Prefetching in Location-Based Services." In Privacy and Security Challenges in Location Aware Computing. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7756-1.ch005.

Abstract:
A significant issue for current location-based services applications is storing information for users securely on the network so that data items can be accessed quickly. One way to do this is to store data items that have a high likelihood of subsequent request. This strategy is known as proactive caching, or prefetching: a technique in which selected information is cached before it is actually needed. By comparison, past proactive caching strategies showed high overhead in terms of computing costs. Therefore, using a Markov chain model, this work aims to address the above problems with an efficient strategy for predicting a user's future position. To model the proposed system and evaluate the feasibility of accessing information on the network for location-based applications, a client-server queuing model is used in this chapter. The observational findings indicate substantial improvements in caching efficiency over previous caching policies that did not use a prefetch module.
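The prediction step of a Markov-chain-based prefetching scheme can be sketched as a first-order transition-count model. The cell names and movement history below are invented for illustration.

```python
# Hypothetical sketch of Markov-chain prefetching for location-based services:
# transition counts between visited cells give a next-cell distribution, and
# the data for the most probable next cell is prefetched into the cache.

from collections import Counter, defaultdict

class MarkovPrefetcher:
    def __init__(self):
        self.transitions = defaultdict(Counter)   # cell -> Counter of next cells

    def observe(self, path):
        # Update transition counts from an observed movement trace.
        for a, b in zip(path, path[1:]):
            self.transitions[a][b] += 1

    def predict(self, current):
        nxt = self.transitions.get(current)
        if not nxt:
            return None
        return nxt.most_common(1)[0][0]           # prefetch data for this cell

mp = MarkovPrefetcher()
mp.observe(["home", "cafe", "office", "cafe", "home", "cafe", "office"])
print(mp.predict("cafe"))    # the historically most common successor of "cafe"
```

Higher-order variants condition on the last k cells instead of one, trading table size for accuracy.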
3

Huang, Shi-Ming, Binshan Lin, and Qun-Shi Deng. "Intelligent Cache Management for Mobile Data Warehouse Systems." In Data Warehousing and Mining. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-951-9.ch088.

Abstract:
This research proposes an intelligent cache mechanism for a data warehouse system in a mobile environment. Because mobile devices can often be disconnected from the host server and due to the low bandwidth of wireless networks, it is more efficient to store query results from a mobile device in the cache. For more personal use of mobile devices, we use a data mining technique to determine the pattern from a record of previous queries. Then the data, which will be retrieved by the user, are prefetched and stored in the cache, thus, improving the query efficiency. We demonstrate the feasibility of the proposed approach with experiments using simulation. Comparison of our approach with a standard approach indicates that there is a significant advantage to using mobile data warehouse systems.
4

Macmaster, Neil. "The Arzew Camp." In War in the Mountains. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198860211.003.0019.

Abstract:
The chapter examines the success of the forms of psychological warfare deployed during Opération Pilote. A key element of Servier’s plan was to recruit peasants to undertake a crash training programme in the COIN centre at Arzew, so that they could be secretly reinserted in the douars to act as future political leaders. The first cohort proved to be of mediocre ability, and their placement in the douars, known to the FLN, proved to be perilous. The army turned to other techniques of mass brainwashing of the rural population, who were either subjected to propaganda teams or, at Warnier in the Chelif, placed in ‘re-education’ camps. Anthropology, promoted by Servier, was marginalized since army officers could not be rapidly trained in the necessary language and ethnology skills, and instead the army relied on behaviourist theories of conditioned reflexes and mechanical forms of mass indoctrination by repetition of slogans. The prefect, and some officers, were deeply scathing of the impacts of such brainwashing techniques. By August 1957 Opération Pilote was wound down but, despite its major failure, was promoted by top commanders as a great success, and was rapidly expanded across Algeria. The claims made for the experiment were supported by dubious forms of psychological mapping that claimed to plot the success of ‘pacification’.
5

Prakash, Amit. "Imperial Sentinels." In Empire on the Seine. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192898876.003.0005.

Abstract:
In 1958, the new Prefect of Police, Maurice Papon, grew and coordinated the constellation of police services dedicated to Algerians, which often swept up Tunisians, Moroccans, and, in some cases, nationals of South American countries. The flagrant, often violent, racialized surveillance of Paris was conceived by police officials as necessary to preserve the empire. In effect, the Paris police acted as sentinels on the imperial ramparts, their activity characterized by growing militarization and the introduction of counterinsurgency techniques born of the wars in Indo-China and Algeria. In addition to police pressure upon the North African community in Paris were the effects of the “fraternal” war between competing Algerian nationalist organizations, chiefly the MNA and the FLN. These groups had their own surveillance and disciplinary mechanisms, which meant that the North African community was a target of social and political control from various directions. This chapter provides a history of this dangerous period for the North African community through an analysis of police records and materials emanating from the FLN and MNA.
6

House, Jim, and Neil Macmaster. "Papon and the Colonial Origins of Police Violence." In Paris 1961: Algerians, State Terror, and Memory. Oxford University PressOxford, 2006. http://dx.doi.org/10.1093/oso/9780199247257.003.0003.

Abstract:
In March 1958 the French government, faced with a crisis in the Paris police force, flew Maurice Papon from Algeria into the capital to provide strong leadership to the Prefecture of Police and to accelerate the battle against the Front de libération nationale. During the remaining four years of the Algerian War Papon was the architect of a novel and far-reaching police and intelligence system. The purpose of this chapter is to examine the origins of this ‘Papon System’ by following the career of the Prefect from Vichy through the politically unstable years of the Fourth Republic. The aim is not to provide a comprehensive biography of Papon but to take note of those key experiences through which he built up expertise in counter-insurgency and, in particular, techniques for the policing of minority populations and nationalist insurgents that he would adapt later to the Paris context. Secondly, throughout this period (c.1940–58), Papon shared the traditional culture of the prefectoral corps which, as loyal servants of the Republic, affected to stand independent of party politics.
APA, Harvard, Vancouver, ISO, and other styles
7

Guo, Yong Zhen, Kotagiri Ramamohanarao, and Laurence A. F. Park. "Web Page Prediction Based on Conditional Random Fields." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2008. https://doi.org/10.3233/978-1-58603-891-5-251.

Full text of the source
Abstract:
Web page prefetching is used to reduce the access latency of the Internet. However, if most prefetched Web pages are not visited by the users in their subsequent accesses, the limited network bandwidth and server resources will not be used efficiently and may worsen the access delay problem. Therefore, it is critical to have an accurate prediction method during prefetching. Conditional Random Fields (CRFs), which are popular sequential learning models, have already been used successfully for many Natural Language Processing (NLP) tasks such as POS tagging, named entity recognition (NER), and segmentation. In this paper, we propose the use of CRFs in the field of Web page prediction. We treat the accessing sessions of previous Web users as observation sequences and label each element of these observation sequences to obtain the corresponding label sequences. Based on these observation and label sequences, we use CRFs to train a prediction model and predict the probable subsequent Web pages for the current users. Our experimental results show that CRFs produce higher Web page prediction accuracy than other popular techniques such as plain Markov chains and Hidden Markov Models (HMMs).
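The plain Markov-chain baseline that the abstract compares CRFs against can be sketched in a few lines of Python: count first-order transitions between consecutive page visits in past sessions, then predict the most frequent successor of the current page. This is a minimal illustrative sketch, not the authors' implementation; the session data and page names are invented for the example.

```python
from collections import defaultdict, Counter

def train_markov(sessions):
    """Count first-order transitions between consecutive page visits."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            transitions[cur][nxt] += 1
    return transitions

def predict_next(transitions, page, k=1):
    """Return the k most frequently observed pages following `page`."""
    return [p for p, _ in transitions[page].most_common(k)]

# Toy access log: each list is one user's session of visited pages.
sessions = [
    ["home", "news", "sports"],
    ["home", "news", "weather"],
    ["home", "news", "sports"],
]
model = train_markov(sessions)
print(predict_next(model, "news"))  # → ['sports']
```

A CRF improves on this baseline by conditioning each prediction on features of the whole observed session rather than only the immediately preceding page.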
APA, Harvard, Vancouver, ISO, and other styles
8

Fleming, James R. "Joseph Fourier’s Theory of Terrestrial Temperatures." In Historical Perspectives on Climate Change. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195078701.003.0010.

Full text of the source
Abstract:
The concept of the greenhouse effect has yet to receive adequate historical attention. Although most writing about the subject is concerned with current scientific or policy issues, a small but growing fraction of the literature contains at least some historical material, which, as this chapter shows for the case of Joseph Fourier, is largely unreliable. Jean Baptiste Joseph Fourier is best known today for his Fourier series, a widely used mathematical technique in which complex functions can be represented by a series of sines and cosines. He is known among physicists and historians of physics for his book Théorie analytique de la chaleur (1822), an elegant but not very precise work that Lord Kelvin described as “a great mathematical poem.” Most of his contemporaries knew him as an administrator, Egyptologist, and scientist. Fourier’s fortunes rose and fell with the political tides. He was a mathematics teacher, a secret policeman, a political prisoner (twice), governor of Egypt, prefect of Isère and Rhône, friend of Napoleon, baron, outcast, and perpetual member and secretary of the French Academy of Sciences. Most people writing on the history of the greenhouse effect merely cite in passing Fourier’s descriptive memoir of 1827 as the “first” to compare the heating of the Earth’s atmosphere to the action of glass in a greenhouse. There is usually no evidence that they have read Fourier’s original papers or manuscripts (in French) or have searched beyond the obvious secondary sources. Nor are most authors aware that Fourier’s paper, usually cited as 1827, was actually read to the Académie Royale des Sciences in 1824, published that same year in the Annales de Chimie et de Physique, and translated into English in the American Journal of Science in 1837! No one cites Fourier’s earlier references to greenhouses in his magnum opus of 1822 and in his earlier papers.
Nor do they identify the subject of terrestrial temperatures as a key motivating factor in all of Fourier’s theoretical and experimental work on heat. Moreover, existing accounts assume far too much continuity in scientific understanding of the greenhouse effect from Fourier to today.
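For readers unfamiliar with the technique named in the abstract, the Fourier series of a function of period \(2\pi\) takes the standard form:

```latex
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n \cos nx + b_n \sin nx\bigr),
\qquad
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx \,dx,
\quad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx \,dx .
```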
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Prefetch techniques"

1

Heirman, Wim, Kristof Du Bois, Yves Vandriessche, Stijn Eyerman, and Ibrahim Hur. "Near-side prefetch throttling." In PACT '18: International conference on Parallel Architectures and Compilation Techniques. ACM, 2018. http://dx.doi.org/10.1145/3243176.3243181.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Cai, Jie, Peter E. Strazdins, and Alistair P. Rendell. "Region-Based Prefetch Techniques for Software Distributed Shared Memory Systems." In 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing. IEEE, 2010. http://dx.doi.org/10.1109/ccgrid.2010.16.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Holtryd, Nadja Ramhoj, Madhavan Manivannan, Per Stenstrom, and Miquel Pericas. "CBP: Coordinated management of cache partitioning, bandwidth partitioning and prefetch throttling." In 2021 30th International Conference on Parallel Architectures and Compilation Techniques (PACT). IEEE, 2021. http://dx.doi.org/10.1109/pact52795.2021.00023.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Qu, Wenxin, Xiaoya Fan, Ying Hu, Yong Xia, and Fuyuan Hu. "New Prefetch Technique Design for L2 Cache." In TENCON 2006 - 2006 IEEE Region 10 Conference. IEEE, 2006. http://dx.doi.org/10.1109/tencon.2006.344002.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Choi, Hong Jun, Dong Oh Son, Cheol Hong Kim, and Jong Myon Kim. "A Novel Prefetch Technique for High Performance Embedded System." In 2014 International Conference on IT Convergence and Security (ICITCS). IEEE, 2014. http://dx.doi.org/10.1109/icitcs.2014.7021713.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

He, Yuan, Hiroshi Sasaki, Shinobu Miwa, and Hiroshi Nakamura. "TCPT: thread criticality-driven prefetcher throttling." In 22nd International Conference on Parallel Architectures and Compilation Techniques (PACT). IEEE, 2013. http://dx.doi.org/10.1109/pact.2013.6618828.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Panda, Biswabandan, and Shankar Balachandran. "TCPT - Thread criticality-driven prefetcher throttling." In 2013 22nd International Conference on Parallel Architectures and Compilation Techniques (PACT). IEEE, 2013. http://dx.doi.org/10.1109/pact.2013.6618835.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Mohapatra, Shubdeep, and Biswabandan Panda. "Drishyam: An Image is Worth a Data Prefetcher." In 2023 32nd International Conference on Parallel Architectures and Compilation Techniques (PACT). IEEE, 2023. http://dx.doi.org/10.1109/pact58117.2023.00013.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Cai, Jie, and Peter E. Strazdins. "An Accurate Prefetch Technique for Dynamic Paging Behaviour for Software Distributed Shared Memory." In 2012 41st International Conference on Parallel Processing (ICPP). IEEE, 2012. http://dx.doi.org/10.1109/icpp.2012.16.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Irie, Hidetsugu, Takefumi Miyoshi, Goki Honjo, Kei Hiraki, and Tsutomu Yoshinaga. "CCCPO: Robust Prefetcher Optimization Technique Based on Cache Convection." In 2011 Second International Conference on Networking and Computing (ICNC). IEEE, 2011. http://dx.doi.org/10.1109/icnc.2011.26.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles