Academic literature on the topic 'Cache memory. Algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cache memory. Algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Cache memory. Algorithms"

1

Prihozhy, A. A. "Simulation of direct mapped, k-way and fully associative cache on all pairs shortest paths algorithms." System Analysis and Applied Information Science, no. 4 (December 30, 2019): 10–18. http://dx.doi.org/10.21122/2309-4923-2019-4-10-18.

Full text
Abstract:
Caches are an intermediate level between the fast CPU and the slow main memory. They aim to store copies of frequently used data and to reduce the access time to the main memory. Caches are capable of exploiting temporal and spatial locality during program execution. When the processor accesses memory, the cache behavior depends on whether the data is in the cache: a cache hit occurs if it is, and a cache miss occurs otherwise. In the latter case, the cache may have to evict other data. Misses produce processor stalls and slow down the computations. The replacement policy chooses the data to evict, trying to p
APA, Harvard, Vancouver, ISO, and other styles
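To make the hit/miss behaviour described in the abstract above concrete, here is a minimal sketch (not the authors' simulator) of a k-way set-associative cache with LRU replacement; a direct-mapped cache is the special case of one way, and a fully associative cache the special case of one set. The geometry parameters and the strided example trace are assumptions chosen for illustration.

```python
# Minimal k-way set-associative cache with LRU replacement (illustrative only).
# Direct-mapped: ways=1; fully associative: sets=1.
from collections import OrderedDict

class Cache:
    def __init__(self, sets, ways, line_size):
        self.sets, self.ways, self.line_size = sets, ways, line_size
        self.lines = [OrderedDict() for _ in range(sets)]   # per-set LRU-ordered tags
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // self.line_size        # cache-line (block) number
        index = block % self.sets             # set the block maps to
        tag = block // self.sets              # identifies the block within its set
        lru = self.lines[index]
        if tag in lru:                        # hit: refresh the LRU position
            lru.move_to_end(tag)
            self.hits += 1
        else:                                 # miss: evict the LRU tag if the set is full
            if len(lru) >= self.ways:
                lru.popitem(last=False)
            lru[tag] = True
            self.misses += 1

# Example: replay a simple strided access trace (parameters are assumptions).
cache = Cache(sets=64, ways=4, line_size=64)
for addr in range(0, 1 << 16, 8):
    cache.access(addr)
print(cache.hits, cache.misses)
```

The hit and miss counters collected this way are exactly what a trace-driven comparison of direct-mapped, k-way and fully associative configurations would report.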
2

Zhu, Wei, and Xiaoyang Zeng. "Decision Tree-Based Adaptive Reconfigurable Cache Scheme." Algorithms 14, no. 6 (2021): 176. http://dx.doi.org/10.3390/a14060176.

Full text
Abstract:
Applications have different preferences for caches, sometimes even within the different running phases. Caches with fixed parameters may compromise the performance of a system. To solve this problem, we propose a real-time adaptive reconfigurable cache based on the decision tree algorithm, which can optimize the average memory access time of cache without modifying the cache coherent protocol. By monitoring the application running state, the cache associativity is periodically tuned to the optimal cache associativity, which is determined by the decision tree model. This paper implements the pr
APA, Harvard, Vancouver, ISO, and other styles
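The periodic tuning loop sketched in the abstract above can be pictured roughly as follows. The decision tree appears here as hand-written nested conditions purely for illustration; in the paper the model is learned from profiling data, and the monitored counters, thresholds, and the set_associativity hook are all assumptions.

```python
# Hypothetical stand-in for a learned decision tree: maps counters observed
# over the last monitoring interval to a cache associativity (illustrative only).
def predict_associativity(miss_rate: float, mem_intensity: float) -> int:
    if miss_rate < 0.05:
        return 1              # cache-friendly phase: low associativity suffices
    if miss_rate < 0.20:
        return 2 if mem_intensity < 0.5 else 4
    return 8                  # conflict-heavy phase: widest configuration

def retune(cache, counters):
    """Invoked once per monitoring interval with hardware-counter readings."""
    ways = predict_associativity(counters["miss_rate"], counters["mem_intensity"])
    cache.set_associativity(ways)   # hypothetical reconfiguration hook
```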
3

Liu, Tian, Wei Zhang, Tao Xu, and Guan Wang. "Research and Analysis of Design and Optimization of Magnetic Memory Material Cache Based on STT-MRAM." Key Engineering Materials 815 (August 2019): 28–34. http://dx.doi.org/10.4028/www.scientific.net/kem.815.28.

Full text
Abstract:
This paper proposes a cache replacement algorithm based on STT-MRAM magnetic memory, which aims to make the material system based on STT-MRAM magnetic memory better used. The algorithm replaces the data blocks in the cache by considering the position of the STT-MRAM magnetic memory head and the hardware characteristics of the STT-MRAM magnetic memory. This method will be different from the traditional magnetic memory-based common cache replacement algorithm. Traditional replacement algorithms are generally designed with only the algorithm to improve the cache, and the hardware characteristics
APA, Harvard, Vancouver, ISO, and other styles
4

Vishnekov, A. V., and E. M. Ivanova. "DYNAMIC CONTROL METHODS OF CACHE LINES REPLACEMENT POLICY." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 191 (May 2020): 49–56. http://dx.doi.org/10.14489/vkit.2020.05.pp.049-056.

Full text
Abstract:
The paper investigates the issues of increasing the performance of computing systems by improving the efficiency of cache memory and analyzes the efficiency indicators of replacement algorithms. We show the necessity of creating automated or automatic means of cache memory tuning to the current conditions of program code execution, namely dynamic control of the cache replacement algorithm that substitutes the current replacement algorithm with a more effective one for the current computation conditions. Methods development for caching policy control based on the program type definition: cyclic, sequentia
APA, Harvard, Vancouver, ISO, and other styles
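A coarse sketch of the kind of dynamic control the abstract above argues for: classify the recent access pattern and switch the active replacement policy accordingly. The classifier, the window size, and the LRU/FIFO choice are invented for illustration and do not reproduce the authors' method.

```python
# Hypothetical runtime controller that swaps the active cache replacement policy.
from collections import deque

def classify(window):
    """Crude pattern label: 'sequential' if addresses mostly increase, else 'other'."""
    addrs = list(window)
    ups = sum(b > a for a, b in zip(addrs, addrs[1:]))
    return "sequential" if ups > 0.8 * (len(addrs) - 1) else "other"

class PolicyController:
    def __init__(self, window_size=1024):
        self.window = deque(maxlen=window_size)   # recent memory accesses
        self.policy = "LRU"

    def on_access(self, addr):
        self.window.append(addr)
        if len(self.window) == self.window.maxlen:
            # Long sequential scans tend to thrash LRU, so fall back to FIFO there.
            self.policy = "FIFO" if classify(self.window) == "sequential" else "LRU"
```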
5

Zhao, Yao, Jian Dong, Hongwei Liu, Jin Wu, and Yanxin Liu. "Performance Improvement of DAG-Aware Task Scheduling Algorithms with Efficient Cache Management in Spark." Electronics 10, no. 16 (2021): 1874. http://dx.doi.org/10.3390/electronics10161874.

Full text
Abstract:
Directed acyclic graph (DAG)-aware task scheduling algorithms have been studied extensively in recent years, and these algorithms have achieved significant performance improvements in data-parallel analytic platforms. However, current DAG-aware task scheduling algorithms, among which HEFT and GRAPHENE are notable, pay little attention to the cache management policy, which plays a vital role in in-memory data-parallel systems such as Spark. Cache management policies that are designed for Spark exhibit poor performance in DAG-aware task-scheduling algorithms, which leads to cache misses and perf
APA, Harvard, Vancouver, ISO, and other styles
6

P, Pratheeksha, and Revathi S. A. "Machine Learning-Based Cache Replacement Policies: A Survey." International Journal of Engineering and Advanced Technology 10, no. 6 (2021): 19–22. http://dx.doi.org/10.35940/ijeat.f2907.0810621.

Full text
Abstract:
Despite extensive developments in improving cache hit rates, designing an optimal cache replacement policy that mimics Belady’s algorithm still remains a challenging task. Existing standard static replacement policies do not adapt to the dynamic nature of memory access patterns, and the diversity of computer programs only exacerbates the problem. Several factors affect the design of a replacement policy, such as hardware upgrades, memory overheads, memory access patterns, model latency, etc. The amalgamation of a fundamental concept like cache replacement with advanced machine learning algori
APA, Harvard, Vancouver, ISO, and other styles
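For context, Belady's algorithm mentioned in the abstract above is the offline optimum: on a miss it evicts the block whose next reference lies farthest in the future (or never occurs). A compact sketch, independent of any surveyed paper:

```python
# Belady's MIN: offline optimal replacement, computable only when the whole trace is known.
def belady_misses(trace, capacity):
    cache, misses = set(), 0
    for i, block in enumerate(trace):
        if block in cache:
            continue
        misses += 1
        if len(cache) >= capacity:
            def next_use(b):
                try:
                    return trace.index(b, i + 1)   # position of the next reference
                except ValueError:
                    return float("inf")            # never referenced again
            cache.remove(max(cache, key=next_use)) # evict the farthest-in-future block
        cache.add(block)
    return misses

print(belady_misses([1, 2, 3, 1, 2, 4, 1, 2, 5], capacity=3))   # -> 5 misses
```

Learned replacement policies are typically trained or evaluated against exactly this oracle, which is why it serves as the reference point in work of this kind.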
7

Пуйденко, Вадим Олексійович. "Automaton Model, Synthesis of a Device and an Adaptive Replacement Algorithm for Cache Memory" [in Ukrainian]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 4 (November 27, 2020): 68–78. http://dx.doi.org/10.32620/reks.2020.4.06.

Full text
Abstract:
The probability indicators of hit or miss events have conditioned the application of certain substitution policies in the associative cache and the associative translation look-aside buffer. The implementation of combined substitution policies can improve cache memory and cache buffer performance in general through the interoperability of algorithms with unidirectional or multidirectional substitution policies and the ability to switch from one policy to another. Adaptation of substitution algorithms is based on the compatibility of algorithms according to several characteristics, such
APA, Harvard, Vancouver, ISO, and other styles
8

Akbari-Bengar, Davood, Ali Ebrahimnejad, Homayun Motameni, and Mehdi Golsorkhtabaramiri. "Improving of cache memory performance based on a fuzzy clustering based page replacement algorithm by using four features." Journal of Intelligent & Fuzzy Systems 39, no. 5 (2020): 7899–908. http://dx.doi.org/10.3233/jifs-201360.

Full text
Abstract:
The Internet is one of the most influential new communication technologies and has influenced all aspects of human life. Extensive use of the Internet and the rapid growth of network services have increased network traffic and ultimately slowed down Internet speeds around the world. Such traffic causes reduced network bandwidth, server response latency, and increased access time to web documents. Cache memory is used to improve CPU performance and reduce response time. Due to the cost and limited size of cache compared to other devices that store information, an alternative policy is used to select
APA, Harvard, Vancouver, ISO, and other styles
9

Zavadskyi, I. O. "Pattern matching by the terms of cache memory limitations." Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics, no. 3 (2019): 56–59. http://dx.doi.org/10.17721/1812-5409.2019/3.8.

Full text
Abstract:
A few known techniques of exact pattern matching, such as the 2-byte read, the skip loop, and sliding search windows, are improved and applied to pattern matching algorithms performing over 256-ary alphabets. Instead of the 2-byte read, we offer a “1.5-byte read”, i.e. reading more than 8 but less than 16 bits of two sequential bytes of a text at each iteration of a search loop. This allows us to fit the search table into L1 cache memory, which significantly improves the algorithm performance. Also, we introduce the so-called double skip loop instead of a single one, resolve problems caused by endianness of
APA, Harvard, Vancouver, ISO, and other styles
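One plausible reading of the "1.5-byte read" idea from the abstract above (the exact bit width and table layout in the paper may differ): combine two consecutive text bytes but keep only 12 of their 16 bits as the table index, so the occurrence/shift table has 2^12 entries and fits in L1 cache.

```python
# Hypothetical 12-bit ("1.5-byte") index into a bad-character-style shift table.
BITS = 12
MASK = (1 << BITS) - 1

def index12(text: bytes, i: int) -> int:
    """Combine bytes i and i+1, then keep 12 of their 16 bits."""
    return (((text[i] << 8) | text[i + 1]) >> (16 - BITS)) & MASK

def build_shift_table(pattern: bytes):
    m = len(pattern)
    table = [m] * (1 << BITS)   # 4096 entries instead of 65536 for a full 2-byte read
    for j in range(m - 1):
        table[index12(pattern, j)] = m - 1 - j
    return table
```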
More sources

Dissertations / Theses on the topic "Cache memory. Algorithms"

1

Fix, James D. "Cache performance analysis of algorithms." Thesis, University of Washington, 2002. http://hdl.handle.net/1773/6880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Furis, Mihai Alexandru. "Cache miss analysis of Walsh-Hadamard Transform algorithms." Philadelphia: Drexel University, 2003. http://dspace.library.drexel.edu/handle/1721.1/109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Liang, Shuang. "Algorithms Designs and Implementations for Page Allocation in SSD Firmware and SSD Caching in Storage Systems." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1268420517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chae, Youngsu. "Algorithms, protocols and services for scalable multimedia streaming." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/8148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Slocum, Joshua Foster. "Performance analysis of cache oblivious Algorithms in the Fresh Breeze memory model." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/76998.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 31-32). The Fresh Breeze program execution model was designed for easy, reliable and massively scalable parallel performance. The model achieves these goals by combining a radical memory model with efficient fine-grain parallelism and managing both in hardware. This presents a unique opportunity for studying program execution in a system whose memory behavior is not well understood. In this th
APA, Harvard, Vancouver, ISO, and other styles
6

Korupolu, Madhukar. "Placement algorithms for hierarchical cooperative caching and other location problems." University of Texas at Austin, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
Abstract:
Thesis (Ph. D.)--University of Texas at Austin, 1999. Vita. Includes bibliographical references (leaves 143-150), Copy 2 (p. 135-142). Available also in a digital version from Dissertation Abstracts.
APA, Harvard, Vancouver, ISO, and other styles
7

Lindqvist, Maria. "Dynamic Eviction Set Algorithms and Their Applicability to Cache Characterisation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420317.

Full text
Abstract:
Eviction sets are groups of memory addresses that map to the same cache set. They can be used to perform efficient information-leaking attacks against the cache memory, so-called cache side channel attacks. In this project, two different algorithms that find such sets are implemented and compared. The second of the algorithms improves on the first by using a concept called group testing. It is also evaluated if these algorithms can be used to analyse or reverse engineer the cache characteristics, which is a new area of application for this type of algorithms. The results show that the optimise
APA, Harvard, Vancouver, ISO, and other styles
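The core notion in the abstract above, addresses that map to the same cache set, can be written down directly when the cache geometry is known; real eviction-set algorithms have to discover these groups experimentally precisely because the geometry and the address-to-set mapping are not known to them. The line size and set count below are assumptions.

```python
# Group addresses by cache set, assuming a simple known geometry (illustrative only).
LINE_SIZE = 64        # bytes per cache line (assumed)
NUM_SETS = 1024       # number of sets in the targeted cache level (assumed)

def cache_set(addr: int) -> int:
    return (addr // LINE_SIZE) % NUM_SETS

def group_by_set(addresses):
    groups = {}
    for a in addresses:
        groups.setdefault(cache_set(a), []).append(a)
    # Any group with at least `associativity` members can serve as an eviction set.
    return groups

sets = group_by_set(range(0, 1 << 20, 4096))
print(len(sets[0]))   # addresses congruent modulo NUM_SETS * LINE_SIZE share a set
```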
8

Kamath, Akash S. "An efficient algorithm for caching online analytical processing objects in a distributed environment." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174678903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pottier, Loïc. "Co-scheduling for large-scale applications : memory and resilience." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN039/document.

Full text
Abstract:
This thesis explores the problems related to concurrent scheduling in the context of massively parallel applications, from two points of view: the memory side (in particular, cache memory) and the fault-tolerance side. With the recent advent of so-called many-core architectures, such as recent multi-core processors, the number of processing units is increasing significantly. In this context, the advantages provided by concurrent scheduling techniques have been demonstrated through numerous studies. Concurrent scheduling, also called co-scheduling,
APA, Harvard, Vancouver, ISO, and other styles
10

Žádník, Martin. "Optimalizace sledování síťových toků [Optimization of Network Flow Monitoring]." Doctoral thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-261221.

Full text
Abstract:
The thesis deals with optimization of network flow monitoring. Flow-based network traffic processing, that is, processing packets based on some state information associated to the flows which the packets belong to, is a key enabler for a variety of network services and applications. The number of simultaneous flows increases with the growing number of new services and applications. It has become a challenge to keep a state per each flow in a network device processing high speed traffic. A flow table, a structure with flow states, must be stored in a memory hierarchy. The memory closest to the
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Cache memory. Algorithms"

1

Nicol, David. Massively parallel algorithms for trace-driven cache simulations. National Aeronautics and Space Administration, Langley Research Center, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Cache memory. Algorithms"

1

Kumar, Piyush. "Cache Oblivious Algorithms." In Algorithms for Memory Hierarchies. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36574-5_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kowarschik, Markus, and Christian Weiß. "An Overview of Cache Optimization Techniques and Cache-Aware Numerical Algorithms." In Algorithms for Memory Hierarchies. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36574-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Monazzah, Amir Mahdi Hosseini, Amir M. Rahmani, Antonio Miele, and Nikil Dutt. "Exploiting Memory Resilience for Emerging Technologies: An Energy-Aware Resilience Exemplar for STT-RAM Memories." In Dependable Embedded Systems. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52017-5_21.

Full text
Abstract:
Due to the consistent pressing quest for larger on-chip memories and caches of multicore and manycore architectures, Spin Transfer Torque Magnetic RAM (STT-MRAM or STT-RAM) has been proposed as a promising technology to replace classical SRAMs in near-future devices. Main advantages of STT-RAMs are a considerably higher transistor density and a negligible leakage power compared with SRAM technology. However, the drawback of this technology is the high probability of errors occurring especially in write operations. Such errors are asymmetric and transition-dependent, where 0 → 1 is the most critical one, and is highly subject to the amount of current (voltage) supplied to the memory during the write operation. As a consequence, STT-RAMs present an intrinsic trade-off between energy consumption vs. reliability that needs to be properly tuned w.r.t. the currently running application and its reliability requirement. This chapter proposes FlexRel, an energy-aware reliability improvement architectural scheme for STT-RAM cache memories. FlexRel considers a memory architecture provided with Error Correction Codes (ECCs) and a custom current regulator for the various cache ways and conducts a trade-off between reliability and energy consumption. The FlexRel cache controller dynamically profiles the number of 0 → 1 transitions of each individual bit write operation in a cache block and, based on that, selects the most suitable cache way and current level to guarantee the necessary error rate threshold (in terms of occurred write errors) while minimizing the energy consumption. We experimentally evaluated the efficiency of FlexRel against the most efficient uniform protection scheme from reliability, energy, area, and performance perspectives. Experimental simulations performed using gem5 have demonstrated that while FlexRel satisfies the given error rate threshold, it delivers up to 13.2% energy saving. From the area footprint perspective, FlexRel delivers up to 7.9% cache ways’ area saving. Furthermore, the performance overhead of the FlexRel algorithm, which changes the traffic patterns of the cache ways during execution, is 1.7% on average.
APA, Harvard, Vancouver, ISO, and other styles
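The 0 → 1 transition profiling that the FlexRel controller performs per write can be illustrated in a couple of lines (a simplified software sketch; the chapter's scheme does this in hardware at cache-way granularity):

```python
# Count 0 -> 1 bit transitions between the old and new contents of a 64-bit word.
WORD_MASK = (1 << 64) - 1

def zero_to_one_transitions(old_word: int, new_word: int) -> int:
    return bin(~old_word & new_word & WORD_MASK).count("1")

# The more 0 -> 1 flips a write causes, the higher the write current (and energy)
# needed to keep the STT-RAM write-error rate under the chosen threshold.
print(zero_to_one_transitions(0b0011, 0b0101))   # -> 1 (only bit 2 flips 0 -> 1)
```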
4

Rahman, Naila. "Algorithms for Hardware Caches and TLB." In Algorithms for Memory Hierarchies. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36574-5_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ghandeharizadeh, Shahram, Sandy Irani, and Jenny Lam. "Cache Replacement with Memory Allocation." In 2015 Proceedings of the Seventeenth Workshop on Algorithm Engineering and Experiments (ALENEX). Society for Industrial and Applied Mathematics, 2014. http://dx.doi.org/10.1137/1.9781611973754.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sanders, Peter. "Fast Priority Queues for Cached Memory." In Algorithm Engineering and Experimentation. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48518-x_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oh, Chansoo, Dong Hyun Kang, Minho Lee, and Young Ik Eom. "A Buffer Cache Algorithm for Hybrid Memory Architecture in Mobile Devices." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-38904-2_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Naeem, M. Asif, and Noreen Jamil. "Online Processing of End-User Data in Real-Time Data Warehousing." In Improving Knowledge Discovery through the Integration of Data Mining Techniques. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8513-0.ch002.

Full text
Abstract:
Stream-based join algorithms are a promising technology for modern real-time data warehouses. A particular category of stream-based joins is the semi-stream join, where a single stream is joined with disk-based master data. The join operator typically works under limited main memory, and this memory is generally not large enough to hold the whole disk-based master data. Recently, a seminal join algorithm called MESHJOIN (Mesh Join) has been proposed in the literature to process semi-stream data. MESHJOIN is a candidate for a resource-aware system setup. However, MESHJOIN is not very selective. In particular, MESHJOIN does not consider the characteristics of stream data, and its performance is suboptimal for skewed stream data. This chapter presents a novel Cache-based Semi-Stream Join (CSSJ) using a cache module. The algorithm is more appropriate for skewed distributions, and we present results for Zipfian distributions of the type that appear in many applications. We conduct a rigorous experimental study to test our algorithm. Our experiments show that CSSJ outperforms MESHJOIN significantly. We also present the cost model for our CSSJ and validate it with experiments.
APA, Harvard, Vancouver, ISO, and other styles
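The cache module described in the abstract above can be pictured as a small in-memory map of the hottest master-data keys that is probed before the slower disk-based join phase. This is a hedged sketch of that general idea, not the CSSJ algorithm itself; the key names and the disk-lookup callback are invented.

```python
# Skeleton of a semi-stream join with a front-end key cache for skewed streams.
from collections import OrderedDict

class KeyCache:
    """Tiny LRU map from join key to its master-data record."""
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)
            return self.data[key]
        return None

    def put(self, key, record):
        self.data[key] = record
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)

def join_stream(stream, cache, lookup_master_on_disk, emit):
    for tup in stream:
        record = cache.get(tup["key"])
        if record is None:                       # cold key: fall back to the disk phase
            record = lookup_master_on_disk(tup["key"])
            cache.put(tup["key"], record)
        emit({**tup, **record})                  # joined output tuple
```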
9

Pahikkala, Tapio, Antti Airola, Thomas Canhao Xu, Pasi Liljeberg, Hannu Tenhunen, and Tapio Salakoski. "On Parallel Online Learning for Adaptive Embedded Systems." In Advances in Systems Analysis, Software Engineering, and High Performance Computing. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6034-2.ch011.

Full text
Abstract:
This chapter considers parallel implementation of the online multi-label regularized least-squares machine-learning algorithm for embedded hardware platforms. The authors focus on the following properties required in real-time adaptive systems: learning in online fashion, that is, the model improves with new data but does not require storing it; the method can fully utilize the computational abilities of modern embedded multi-core computer architectures; and the system efficiently learns to predict several labels simultaneously. They demonstrate on a hand-written digit recognition task that the online algorithm converges faster, with respect to the amount of training data processed, to an accurate solution than a stochastic gradient descent-based baseline. Further, the authors show that their parallelization of the method scales well on a quad-core platform. Moreover, since Network-on-Chip (NoC) has been proposed as a promising candidate for future multi-core architectures, they implement a NoC system consisting of 16 cores. The proposed machine learning algorithm is evaluated on the NoC platform. Experimental results show that, by optimizing the cache behaviour of the program, cache/memory efficiency can improve significantly. Results from the chapter provide a guideline for designing future embedded multi-core machine learning devices.
APA, Harvard, Vancouver, ISO, and other styles
10

Petersen, Wesley, and Peter Arbenz. "Basic Issues." In Introduction to Parallel Computing. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780198515760.003.0006.

Full text
Abstract:
Since first proposed by Gordon Moore (an Intel founder) in 1965, his law [107] that the number of transistors on microprocessors doubles roughly every one to two years has proven remarkably astute. Its corollary, that central processing unit (CPU) performance would also double every two years or so has also remained prescient. Figure 1.1 shows Intel microprocessor data on the number of transistors beginning with the 4004 in 1972. Figure 1.2 indicates that when one includes multi-processor machines and algorithmic development, computer performance is actually better than Moore’s 2-year performance doubling time estimate. Alas, however, in recent years there has developed a disagreeable mismatch between CPU and memory performance: CPUs now outperform memory systems by orders of magnitude according to some reckoning [71]. This is not completely accurate, of course: it is mostly a matter of cost. In the 1980s and 1990s, Cray Research Y-MP series machines had well balanced CPU to memory performance. Likewise, NEC (Nippon Electric Corp.), using CMOS (see glossary, Appendix F) and direct memory access, has well balanced CPU/Memory performance. ECL (see glossary, Appendix F) and CMOS static random access memory (SRAM) systems were and remain expensive and like their CPU counterparts have to be carefully kept cool. Worse, because they have to be cooled, close packing is difficult and such systems tend to have small storage per volume. Almost any personal computer (PC) these days has a much larger memory than supercomputer memory systems of the 1980s or early 1990s. In consequence, nearly all memory systems these days are hierarchical, frequently with multiple levels of cache. Figure 1.3 shows the diverging trends between CPUs and memory performance. Dynamic random access memory (DRAM) in some variety has become standard for bulk memory. There are many projects and ideas about how to close this performance gap, for example, the IRAM [78] and RDRAM projects [85]. We are confident that this disparity between CPU and memory access performance will eventually be tightened, but in the meantime, we must deal with the world as it is.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Cache memory. Algorithms"

1

Nimako, Gideon, E. J. Otoo, and Daniel Ohene-Kwofie. "Cache-sensitive MapReduce DGEMM algorithms for shared memory architectures." In the South African Institute for Computer Scientists and Information Technologists Conference. ACM Press, 2012. http://dx.doi.org/10.1145/2389836.2389849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kumar, Akhilesh, and Laxmi N. Bhuyan. "Parallel FFT Algorithms for Cache Based Shared Memory Multiprocessors." In 1993 International Conference on Parallel Processing - ICPP'93 Vol3. IEEE, 1993. http://dx.doi.org/10.1109/icpp.1993.136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kapoor, Bhanu, and Patrick W. Bosshart. "Cache parameters and memory power consumption of video algorithms." In Photonics West '98 Electronic Imaging, edited by Sethuraman Panchanathan, Frans Sijstermans, and Subramania I. Sudharsanan. SPIE, 1998. http://dx.doi.org/10.1117/12.304671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kuszmaul, William, and Alek Westover. "Cache-Efficient Parallel-Partition Algorithms using Exclusive-Read-and-Write Memory." In SPAA '20: 32nd ACM Symposium on Parallelism in Algorithms and Architectures. ACM, 2020. http://dx.doi.org/10.1145/3350755.3400234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hac, A. "Sensitivity study of asynchronous algorithms in disk buffer cache memory." In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences. IEEE, 1992. http://dx.doi.org/10.1109/hicss.1992.183149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Silvestre, Iago, and Leandro Becker. "Performance Analysis of Embedded Control Algorithms used in UAVs." In Simpósio Brasileiro de Engenharia de Sistemas Computacionais. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sbesc_estendido.2020.13110.

Full text
Abstract:
Performance analysis of embedded systems is critical when dealing with Cyber-Physical Systems that require stability guarantees. They typically operate having to respect deadlines imposed during the design of the related control system. In the recent past, performance analysis was typically done only by executing the code, and taking measurements, on the target embedded platform. Nowadays, code execution and measurement can also be done in simulation software, which offers a greater degree of freedom for designers to configure the system for the desired tests. This paper presents results obtained from analyz
APA, Harvard, Vancouver, ISO, and other styles
7

Pavan, Pablo José, Matheus da Silva Serpa, Víctor Martínez, Edson Luiz Padoin, Jairo Panetta, and Philippe O. A. Navaux. "Strategies to Improve the Performance and Energy Efficiency of Stencil Computations for NVIDIA GPUs." In XVII Workshop em Desempenho de Sistemas Computacionais e de Comunicação. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/wperformance.2018.3348.

Full text
Abstract:
Energy and performance of parallel systems are an increasing concern for new large-scale systems. Research has been developed in response to this challenge, aiming at the manufacture of more energy-efficient systems. In this context, we improved performance and achieved energy efficiency by developing three different strategies that use the GPU memory subsystem (global, shared, and read-only memory). We also developed two optimizations that exploit data locality and the registers of the GPU architecture. Our optimizations were applied to GPU algorithms for stencil applications ach
APA, Harvard, Vancouver, ISO, and other styles
8

Mingardi, William B., and Gustavo M. D. Vieira. "Characterizing Synchronous Writes in Stable Memory Devices." In XVIII Workshop em Desempenho de Sistemas Computacionais e de Comunicação. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/wperformance.2019.6458.

Full text
Abstract:
Distributed algorithms that operate in the fail-recovery model rely on the state stored in stable memory to guarantee the irreversibility of operations even in the presence of failures. The performance of these algorithms leans heavily on the performance of stable memory. Current storage technologies have a defined performance profile: data is accessed in blocks of hundreds or thousands of bytes, random access to these blocks is expensive and sequential access is somewhat better. File system implementations hide some of the performance limitations of the underlying storage devices using buffe
APA, Harvard, Vancouver, ISO, and other styles
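The synchronous write to stable memory that such fail-recovery algorithms depend on amounts to forcing each log record onto the device before acknowledging the operation. A minimal sketch using POSIX-style fsync (the file path and record format are placeholders):

```python
# Append a record and force it to stable storage before returning (illustrative).
import os

def durable_append(path: str, record: bytes) -> None:
    with open(path, "ab") as f:
        f.write(record + b"\n")
        f.flush()             # flush Python's user-space buffer to the OS
        os.fsync(f.fileno())  # ask the OS to push the data down to the device
```

Measurements of the kind the paper describes essentially characterize how the cost of that forced write changes with record size, access pattern, and the underlying storage technology.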
9

Alghazo, J., A. Akaaboune, and N. Botros. "SF-LRU cache replacement algorithm." In Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004. IEEE, 2004. http://dx.doi.org/10.1109/mtdt.2004.1327979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

A. Gunathilake, Nilupulee, Ahmed Al-Dubai, William J. Buchanan, and Owen Lo. "Electromagnetic Analysis of an Ultra-Lightweight Cipher: PRESENT." In 10th International Conference on Information Technology Convergence and Services (ITCSE 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110915.

Full text
Abstract:
Side-channel attacks are an unpredictable risk factor in cryptography. Therefore, continuous observations of physical leakages are essential to minimise vulnerabilities associated with cryptographic functions. Lightweight cryptography is a novel approach in progress towards internet-of-things (IoT) security. Thus, it would provide sufficient data and privacy protection in such a constrained ecosystem. IoT devices are resource-limited in terms of data rates (in kbps), power maintainability (battery) as well as hardware and software footprints (physical size, internal memory, RAM/ROM). Due to th
APA, Harvard, Vancouver, ISO, and other styles