Dissertations / Theses on the topic 'Reconfigurable caches'
The 23 dissertations and theses listed below address the topic 'Reconfigurable caches'; abstracts are reproduced where they were available in the source metadata.
Ramaswamy, Subramanian. "Active management of Cache resources." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24663.
Brewer, Jeffery R. "Reconfigurable cache memory." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1885437651&sid=8&Fmt=2&clientId=1509&RQT=309&VName=PQD.
Brewer, Jeffery Ramon. "Reconfigurable Cache Memory." OpenSIUC, 2009. https://opensiuc.lib.siu.edu/theses/48.
Jupally, Raghavendra Prasada Rao. "Implementation of Reconfigurable Computing Cache Architecture." OpenSIUC, 2010. https://opensiuc.lib.siu.edu/theses/336.
Bond, Paul Joseph. "Design and analysis of reconfigurable and adaptive cache structures." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/14983.
Ho, Nam [Verfasser]. "FPGA-based reconfigurable cache mapping schemes: design and optimization / Nam Ho." Paderborn : Universitätsbibliothek, 2018. http://d-nb.info/1167856481/34.
Bani, Ruchi Rastogi. "A new N-way reconfigurable data cache architecture for embedded systems." [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/ark:/67531/metadc12079.
Bani, Ruchi Rastogi. "A New N-way Reconfigurable Data Cache Architecture for Embedded Systems." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12079/.
Kerr Junior, Roberto Borges. "Proposta e desenvolvimento de um algoritmo de associatividade reconfigurável em memórias cache" [Proposal and development of a reconfigurable associativity algorithm for cache memories]. Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-01102008-135441/.
Full textWith the constant evolution of processors architecture, its getting even bigger the overhead generated with memory access. Trying to avoid this problem, some processors developers are using several techniques to improve the performance, as the use of cache memories. By the otherside, cache memories cannot supply all their needs, thats why its important some new technique that could use better the cache memory. Working on this problem, some authors are using reconfigurable computing to improve the cache memorys performance. This work analyses the reconfiguration of the cache memory associativity algorithm, and propose some improvements on this algorithm to better use its resources, showing some practical results from simulations with several cache organizations.
Avakian, Annie. "Reducing Cache Access Time in Multicore Architectures Using Hardware and Software Techniques." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1335461322.
Full textMesquita, Daniel Gomes. "Architectures Reconfigurables et Cryptographie : une analyse de robustesse face aux attaques par canaux cachés." Montpellier 2, 2006. http://www.theses.fr/2006MON20097.
Full textThis work addresses the reconfigurable architectures for cryptographic applications theme, emphasizing the robustness issue. Some mathematical background is reviewed, as well the state of art of reconfigurable architectures. Side channel attacks, specially the DPA and SPA attacks, are studied. As consequence, algorithmic, hardware and architectural countermeasures are proposed. A new parallel reconfigurable architecture is proposed to implement the Leak Resistant Arithmetic. This new architecture outperforms most of state of art circuits for modular exponentiation, but the main feature of this architecture is the robustness against DPA attacks
Mesquita, Daniel Gomes. "Architectures Reconfigurables et Cryptographie: Une Analyse de Robustesse et Contremesures Face aux Attaques par Canaux Cachés" [Reconfigurable architectures and cryptography: an analysis of robustness and countermeasures against side-channel attacks]. PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00115736.
…cryptography. Various aspects are studied, such as the basic principles of cryptography, modular arithmetic, hardware attacks, and reconfigurable architectures. Original methods for counteracting side-channel attacks, notably DPA, are proposed. The proposed architecture is efficient in terms of performance and, above all, robust against DPA.
Cuminato, Lucas Albers. "Otimização de memória cache em tempo de execução para o processador embarcado LEON3" [Run-time cache memory optimization for the LEON3 embedded processor]. Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-22092014-161846/.
Energy consumption is one of the most important issues in embedded systems. Studies have shown that in this type of system the cache consumes most of the power supplied to the processor. In most embedded processors, the cache configuration parameters are fixed and cannot be changed after manufacture or synthesis. This is not ideal, since a fixed configuration may not suit a particular application, resulting in lower performance and excessive energy consumption. In this context, this project proposes a hardware implementation, based on reconfigurable computing, that is able to reconfigure the parameters of the LEON3 processor's cache at run time, improving application performance and reducing the system's power consumption. The experimental results show that the processor's power consumption can be reduced by up to 5% with only 0.1% degradation in performance.
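A common way to reason about this kind of run-time tuning is to estimate, per candidate configuration, the cache energy from access and miss counts and pick the cheapest one. The sketch below is a generic energy model with placeholder per-event values; it is not the actual LEON3 hardware mechanism described in the thesis.

```c
#include <stddef.h>

/* Generic sketch of an energy model for comparing cache configurations.
 * The per-event energies and static power are illustrative placeholders;
 * a real system would obtain counts from hardware counters or simulation. */
typedef struct {
    const char *name;
    double e_hit_nj;     /* energy per cache hit (nJ), assumed            */
    double e_miss_nj;    /* extra energy per miss (nJ), incl. off-chip    */
    double p_static_mw;  /* static power of this configuration (mW)       */
} cache_cfg_t;

typedef struct {
    unsigned long accesses;
    unsigned long misses;
    double runtime_ms;
} profile_t;

static double cfg_energy_uj(const cache_cfg_t *c, const profile_t *p)
{
    double dynamic_nj = p->accesses * c->e_hit_nj + p->misses * c->e_miss_nj;
    double static_uj  = c->p_static_mw * p->runtime_ms;  /* mW * ms = uJ */
    return dynamic_nj / 1000.0 + static_uj;
}

/* Pick the configuration with the lowest estimated energy for its profile. */
static size_t pick_min_energy(const cache_cfg_t *cfgs, const profile_t *profs,
                              size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (cfg_energy_uj(&cfgs[i], &profs[i]) <
            cfg_energy_uj(&cfgs[best], &profs[best]))
            best = i;
    return best;
}
```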
Brogioli, Michael C. "Dynamically reconfigurable data caches in low-power computing." Thesis, 2003. http://hdl.handle.net/1911/17647.
Bandara, Sahan Lakshitha. "Investigating the viability of adaptive caches as a defense mechanism against cache side-channel attacks." Thesis, 2019. https://hdl.handle.net/2144/36079.
Barzegar, Ali. "Dynamically Reconfigurable Active Cache Modeling." Thesis, 2014. http://spectrum.library.concordia.ca/978188/1/Barzegar_MASc_S2014.pdf.
Lin, Chia Hao, and 林嘉豪. "On-Line Reconfigurable Cache for Embedded Systems." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/49976132696808727162.
Full text國立暨南國際大學
資訊工程學系
94
To reduce energy consumption, this work investigates an on-line reconfigurable cache architecture for embedded systems. First, some of the application's front-end instructions are preloaded into the instruction cache before execution to reduce the power consumed by off-chip memory accesses. Second, way-prediction is adopted to reduce the power consumed by the n-way set-associative cache. Third, a most-case optimal configuration searching algorithm is proposed that operates faster and is significantly more precise. Because different inputs to the same application can lead to different optimal configurations, an on-line reconfigurable cache algorithm is finally derived that searches for an optimal cache configuration for each application in near real time to save power. Experimental results show that the non-on-line reconfigurable cache structure saves 12.86% of total memory access energy over Zhang's scheme, and the proposed on-line reconfigurable cache saves a further 14.57% of memory access energy compared with the non-on-line one.
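As a rough sketch of what a configuration-search heuristic of this kind can look like, the code below tunes cache size, then line size, then associativity one parameter at a time, keeping a change only when a hypothetical energy estimate improves. It follows the general spirit of heuristics such as Zhang's (cited in the abstract) rather than the exact algorithm proposed in this thesis.

```c
/* Hedged sketch of a one-parameter-at-a-time cache tuning heuristic.
 * estimate_energy() stands in for a simulator or hardware counters and is
 * assumed to exist elsewhere; the search order (size -> line -> ways) mirrors
 * well-known tuning heuristics, not necessarily the thesis's algorithm. */
typedef struct { unsigned size_kb, line_b, ways; } cfg_t;

extern double estimate_energy(cfg_t cfg);   /* assumed external model */

static cfg_t tune_cache(void)
{
    static const unsigned sizes[] = { 1, 2, 4, 8 };   /* KB    */
    static const unsigned lines[] = { 16, 32, 64 };   /* bytes */
    static const unsigned ways[]  = { 1, 2, 4 };

    cfg_t best = { sizes[0], lines[0], ways[0] };
    double best_e = estimate_energy(best);

    for (unsigned i = 1; i < sizeof sizes / sizeof *sizes; i++) {
        cfg_t c = best; c.size_kb = sizes[i];
        double e = estimate_energy(c);
        if (e < best_e) { best = c; best_e = e; }
    }
    for (unsigned i = 1; i < sizeof lines / sizeof *lines; i++) {
        cfg_t c = best; c.line_b = lines[i];
        double e = estimate_energy(c);
        if (e < best_e) { best = c; best_e = e; }
    }
    for (unsigned i = 1; i < sizeof ways / sizeof *ways; i++) {
        cfg_t c = best; c.ways = ways[i];
        double e = estimate_energy(c);
        if (e < best_e) { best = c; best_e = e; }
    }
    return best;   /* ~10 evaluations instead of 4*3*3 = 36 exhaustive ones */
}
```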
Jheng, Geng-Cyuan, and 鄭耕全. "Real-time Reconfigurable Cache for Low-Power Embedded Systems." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/39182178624040456726.
Full text國立暨南國際大學
資訊工程學系
96
Modern embedded systems execute a small set of applications, or even a single one, repeatedly. Specializing the cache configuration for a particular application is well known to bring large benefits in performance and power. To shorten the search for an optimal cache configuration, a most-case optimal configuration searching algorithm covering the entire execution of an application was proposed by Lin et al. in 2006, which greatly reduces the time and power spent searching. In recent years, however, it has been shown that the behavior of an application varies from phase to phase, so tuning the cache configuration per phase yields a further reduction in power consumption. This work presents a mechanism that determines the optimal configuration for each phase of an execution. By dividing an execution into small time intervals and applying the corresponding locally optimal configuration to the L1 instruction cache in each interval, this work obtains an average energy saving of 91.6% compared with the average energy consumption of all four-way set-associative caches in the search space, and an average power reduction of 5.29% compared with the energy consumption of the benchmarks under their respective globally optimal cache configurations.
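A minimal sketch of interval-based tuning follows, assuming fixed-length intervals and a per-interval configuration table produced by an earlier tuning pass; the table, the interval length, and the reconfigure_l1i() hook are hypothetical placeholders, not the thesis's actual mechanism.

```c
#include <stdint.h>

/* Hedged sketch: apply a precomputed, per-interval "locally optimal" cache
 * configuration at fixed instruction-count boundaries. The profile table and
 * the reconfigure_l1i() hook are assumed to be provided elsewhere (e.g. by an
 * off-line tuning run and by cache control registers). */
typedef struct { unsigned size_kb, line_b, ways; } cfg_t;

#define INTERVAL_INSNS  1000000ULL      /* assumed: 1M instructions per interval */
#define NUM_INTERVALS   64

extern const cfg_t interval_cfg[NUM_INTERVALS]; /* assumed tuning result  */
extern void reconfigure_l1i(cfg_t cfg);         /* assumed hardware hook  */

void on_instruction_retired(uint64_t retired_insns)
{
    /* Reconfigure only when crossing an interval boundary. */
    if (retired_insns % INTERVAL_INSNS != 0)
        return;
    uint64_t idx = (retired_insns / INTERVAL_INSNS) % NUM_INTERVALS;
    reconfigure_l1i(interval_cfg[idx]);
}
```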
Jheng, Geng-Cyuan. "Real-time Reconfigurable Cache for Low-Power Embedded Systems." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0020-3107200814582400.
Peng, Cheng-Hao, and 彭政豪. "Design of a Reconfigurable Cache for Low-Power Embedded Systems." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/74241048008722334994.
Full text國立暨南國際大學
資訊工程學系
98
Modern embedded systems execute a small set of applications, or a single one, repeatedly. Specializing the cache configuration for a particular application is well known to bring large advantages in performance and energy saving. To shorten the search for an optimal cache configuration, a most-case optimal configuration searching algorithm was previously proposed that greatly reduces the time and power spent searching. In recent years, however, it has been shown that the behavior of an application varies from phase to phase, and tuning the cache configuration per phase yields a further reduction in power consumption. This work presents a mechanism that chooses the optimal configuration for each phase of an execution by dividing the execution into flexible-length time intervals and applying the corresponding locally optimal configuration to the L1 instruction cache in each interval. The experimental results show an energy saving of over 4.374% compared with dividing the whole application into 64 intervals, and an average power reduction of 6.653% from using flexible phases instead of slicing the application into fixed phases of 1M instructions each.
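One simple way to obtain flexible-length phases is to start from fixed intervals and merge adjacent intervals that prefer the same configuration, so the cache is reconfigured only at genuine phase changes. The sketch below illustrates that idea under assumed data structures; it is not the thesis's actual phase-detection mechanism.

```c
#include <stddef.h>

/* Hedged sketch: collapse consecutive fixed intervals that prefer the same
 * cache configuration into one flexible-length phase, so reconfiguration
 * happens only at phase boundaries. The data layout is an assumption. */
typedef struct { unsigned size_kb, line_b, ways; } cfg_t;
typedef struct { size_t first_interval, num_intervals; cfg_t cfg; } phase_t;

static int cfg_equal(cfg_t a, cfg_t b)
{
    return a.size_kb == b.size_kb && a.line_b == b.line_b && a.ways == b.ways;
}

/* Merge per-interval configs into phases; 'out' must hold up to n entries.
 * Returns the number of phases produced. */
size_t build_phases(const cfg_t *per_interval, size_t n, phase_t *out)
{
    if (n == 0) return 0;
    size_t np = 0;
    out[np] = (phase_t){ 0, 1, per_interval[0] };
    for (size_t i = 1; i < n; i++) {
        if (cfg_equal(per_interval[i], out[np].cfg)) {
            out[np].num_intervals++;                       /* extend phase */
        } else {
            np++;
            out[np] = (phase_t){ i, 1, per_interval[i] };  /* new phase    */
        }
    }
    return np + 1;
}
```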
Hsu, Po-Hao, and 許博豪. "Reconfigurable Cache Memory Mechanism for Integral Image and Integral Histogram Applications." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/58964530730016983247.
Full text國立臺灣大學
電子工程學研究所
100
With the development of semiconductor technology, the capability of microprocessors has doubled roughly every 18 months, following Moore's Law, while the speed of off-chip DRAM grows only about 7% per year, leaving a huge gap between CPU and DRAM speed. Cache memory is a high-speed memory that reduces the access latency between the processor and off-chip DRAM, and it usually occupies a large area of the whole system. However, for operations on integral images and integral histograms, which are widely used because they provide arbitrary-sized block summations and histograms in constant time, the read and write mechanisms of a conventional cache do not suit the stream-processing nature of these algorithms. A larger cache reduces the cycle count, but the analysis shows that the cache hit rate reaches a bottleneck and the cycle-count reduction hits a limit. For these reasons, this thesis proposes a reconfigurable cache memory mechanism that supports both general data access and stream processing. The proposed memory has two modes: a normal cache mode and a Row-Based Stream Processing (RBSP) mode, a mechanism specific to the data access patterns of integral images and integral histograms. The RBSP mode reduces the cycle count because all subsequently needed data is precisely prefetched with an image row as the basic access unit. Two integral image and integral histogram applications, the SURF algorithm and the center-surround histogram of a saliency map, are implemented to verify the proposed mechanism. Moreover, data reuse schemes, intra-filter-size sharing and inter-filter-size sharing between different filter sizes and different filtering stripes, are exploited to further reduce data traffic to off-chip DRAM. A mapping algorithm, implemented in both hardware and software versions, helps the RBSP memory read and write data, and a Memory Dividing Technique (MDT) is proposed to further reduce the word length. The whole system is built in the CoWare Platform Architect to verify the design. For a VGA 640 x 480 target image size, the experimental results show that the proposed reconfigurable RBSP memory saves 38.31% and 48.29% of memory cycle count for the two applications compared with a traditional data cache of the same size. The hardware is implemented in Verilog-HDL and synthesized with Synopsys Design Compiler in TSMC 180nm technology; the total gate count of the RBSP memory is 557.0K. The overhead of the proposed RBSP memory is small, only 7.61% (hardware-based) or 5.28% (software-based) compared with a set-associative cache.
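For context on why row-oriented streaming access suits this workload, the sketch below shows the standard integral image construction and the constant-time block-sum query it enables; this is textbook material, not the thesis's RBSP hardware.

```c
#include <stdint.h>

/* Standard integral image: I[y*w + x] holds the sum of all pixels in the
 * rectangle from (0,0) to (x,y) inclusive. Any block sum is then obtained
 * with four lookups, which is what integral-image applications rely on. */
static void build_integral(const uint8_t *img, uint32_t *I, int w, int h)
{
    for (int y = 0; y < h; y++) {
        uint32_t row_sum = 0;
        for (int x = 0; x < w; x++) {
            row_sum += img[y * w + x];
            I[y * w + x] = row_sum + (y > 0 ? I[(y - 1) * w + x] : 0);
        }
    }
}

/* Sum of pixels in the inclusive rectangle (x0,y0)..(x1,y1) in O(1) lookups. */
static uint32_t block_sum(const uint32_t *I, int w,
                          int x0, int y0, int x1, int y1)
{
    uint32_t a = (x0 > 0 && y0 > 0) ? I[(y0 - 1) * w + (x0 - 1)] : 0;
    uint32_t b = (y0 > 0)           ? I[(y0 - 1) * w + x1]       : 0;
    uint32_t c = (x0 > 0)           ? I[y1 * w + (x0 - 1)]       : 0;
    uint32_t d =                      I[y1 * w + x1];
    return d - b - c + a;
}
```

Because both the construction and the queries walk whole image rows, prefetching at row granularity, as the RBSP mode described above does, matches the access pattern better than demand-fetched cache lines.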
Yang, Yun-Chung, and 楊允中. "A Reconfigurable Cache for Efficient Usage of the Tag RAM Space." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/7ctc27.
Full text國立中山大學
資訊工程學系研究所
102
In almost every typical modern SoC (system-on-chip), caches keep growing as SoC fabrics are enhanced to satisfy a variety of workloads, and caches can account for more than 60% of the chip area. Most of the time an application does not use the entire cache space, so the underutilized portion constantly consumes power without contributing anything. Industry has therefore started developing mechanisms that make the cache size reconfigurable. Recent work extends the turned-off part of the cache as local memory, also called scratchpad memory (SPM), which can benefit other activities and further increase performance or improve instruction delivery. However, such SPM schemes use only the data RAMs; the tag RAMs remain unused. This work proposes an architecture that enlarges the SPM space by reusing the tag RAMs of either the instruction or the data cache, implemented on an ARM-compatible CPU data cache as a case study. The experimental results show that 12.5% of the memory space can be reclaimed with 0.08% hardware overhead for a 4 KB, 4-way set-associative cache with a 32-byte line size, equivalent to the ARM Cortex-A5.
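To get a feel for how much tag RAM such a scheme could expose, the sketch below computes the tag storage of a given cache geometry (here a 4 KB, 4-way, 32-byte-line cache with 32-bit addresses, mirroring the configuration cited above). The per-line status-bit count and the 32-bit padding are assumptions; this is a generic back-of-the-envelope calculation, not the thesis's exact accounting.

```c
#include <stdio.h>

/* Back-of-the-envelope sketch: how much tag-RAM storage a given cache
 * geometry contains, and what fraction of the data capacity that is.
 * Status-bit count and 32-bit entry padding are assumptions. */
static unsigned log2u(unsigned v) { unsigned b = 0; while (v >>= 1) b++; return b; }

int main(void)
{
    const unsigned addr_bits = 32;
    const unsigned cache_b   = 4 * 1024;  /* 4 KB  */
    const unsigned line_b    = 32;        /* bytes */
    const unsigned ways      = 4;

    unsigned lines    = cache_b / line_b;                          /* 128 */
    unsigned sets     = lines / ways;                              /* 32  */
    unsigned tag_bits = addr_bits - log2u(sets) - log2u(line_b);   /* 22  */

    unsigned raw_bits     = lines * (tag_bits + 2); /* + valid + dirty (assumed) */
    unsigned padded_bytes = lines * 4;              /* if entries padded to 32 bits */

    printf("tag+status: %u bits (%.1f%% of data array)\n",
           raw_bits, 100.0 * raw_bits / (8.0 * cache_b));
    printf("padded to 32-bit entries: %u bytes (%.1f%% of data array)\n",
           padded_bytes, 100.0 * padded_bytes / cache_b);
    return 0;
}
```

Under the padded-entry reading, the reclaimable 512 bytes correspond to the 12.5% figure quoted in the abstract, though the thesis's exact accounting may differ.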
Kim, Yoonjin. "DESIGNING COST-EFFECTIVE COARSE-GRAINED RECONFIGURABLE ARCHITECTURE." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-05-649.