Academic literature on the topic 'Computer Memory Architecture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer Memory Architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computer Memory Architecture"

1. Choi, Yongseok, Eunji Lim, Jaekwon Shin, and Cheol-Hoon Lee. "MemBox: Shared Memory Device for Memory-Centric Computing Applicable to Deep Learning Problems." Electronics 10, no. 21 (November 8, 2021): 2720. http://dx.doi.org/10.3390/electronics10212720.

Abstract: Large-scale computational problems that modern computers need to address, such as deep learning or big data analysis, cannot be solved on a single computer but can be solved with distributed computer systems. Since most distributed computing systems, consisting of a large number of networked computers, must propagate their computational results to each other, they can suffer from increasing overhead, resulting in lower computational efficiency. To solve these problems, we proposed an architecture for a distributed system that uses a shared memory simultaneously accessible by multiple computers. Our architecture is intended to be implemented in an FPGA or ASIC. Using an FPGA board that implements our architecture, we configured an actual distributed system and showed its feasibility. We compared the results of a deep learning application test using our architecture with those using Google TensorFlow's parameter server mechanism. We showed improvements of our architecture over Google TensorFlow's parameter server mechanism and determined the future direction of research by deriving the expected problems.

2. Pancratov, Cosmin, Jacob M. Kurzer, Kelly A. Shaw, and Matthew L. Trawick. "Why Computer Architecture Matters: Memory Access." Computing in Science & Engineering 10, no. 4 (July 2008): 71–75. http://dx.doi.org/10.1109/mcse.2008.106.

3. Eyyubov, Ramazan Əzizxan oğlu, Leyla Elxan qızı Bayramova, and Zeynəb Mirsəməd qızı Sadıqova. "Computer architecture and John von Neumann principles." Scientific Work 15, no. 2 (March 9, 2021): 11–15. http://dx.doi.org/10.36719/2663-4619/63/11-15.

Abstract: The program is stored in the machine's memory from an external device. The control unit organizes its execution according to the program held in memory. The arithmetic-logic unit performs mathematical and logical calculations on the entered commands. Thus, the computer performs calculations without human assistance. Keywords: computer, software, device, information, scheme.

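The stored-program idea summarized in this abstract is small enough to model directly. Below is a minimal C sketch of a von Neumann fetch-decode-execute loop; the three-instruction machine is invented purely for illustration and is not from the cited article:

#include <stdio.h>

/* Hypothetical toy ISA for illustration: opcode in the high byte,
   operand address in the low byte. Program and data share one memory,
   which is the essence of the von Neumann design. */
enum { LOAD = 1, ADD = 2, HALT = 3 };

int main(void) {
    unsigned short mem[256] = {
        [0] = (LOAD << 8) | 10,   /* acc = mem[10]  */
        [1] = (ADD  << 8) | 11,   /* acc += mem[11] */
        [2] = (HALT << 8),        /* stop           */
        [10] = 40, [11] = 2,      /* data lives in the same memory */
    };
    unsigned pc = 0, acc = 0, running = 1;

    while (running) {
        unsigned short inst = mem[pc++];              /* fetch   */
        unsigned op = inst >> 8, addr = inst & 0xFF;  /* decode  */
        switch (op) {                                 /* execute */
        case LOAD: acc = mem[addr];  break;
        case ADD:  acc += mem[addr]; break;
        case HALT: running = 0;      break;
        }
    }
    printf("acc = %u\n", acc);  /* prints 42 */
    return 0;
}

The point of the model is that instructions and data occupy the same array mem, which is exactly the stored-program property the abstract describes.
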
4. Yantır, Hasan Erdem, Ahmed M. Eltawil, and Khaled N. Salama. "Efficient Acceleration of Stencil Applications through In-Memory Computing." Micromachines 11, no. 6 (June 26, 2020): 622. http://dx.doi.org/10.3390/mi11060622.

Abstract: Traditional computer architectures suffer severely from the bottleneck between processing elements and memory, which is the biggest barrier to their scalability. At the same time, the amount of data that applications need to process is increasing rapidly, especially in the era of big data and artificial intelligence. This fact pushes computer architecture design toward more data-centric principles. Therefore, new paradigms such as in-memory and near-memory processing have emerged to counteract the memory bottleneck by bringing memory closer to computation or integrating the two. Associative processors, which combine the processor and memory in the same location, are a promising candidate for in-memory computation. Stencil codes are one class of applications that require iterative processing of huge amounts of data, so associative processors can provide a paramount advantage for them. As a demonstration, two in-memory associative processor architectures for 2D stencil codes are proposed, implemented in both emerging memristor and traditional SRAM technologies. The proposed architecture achieves promising efficiency on a variety of stencil applications and thus proves its applicability to scientific stencil computing.

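For readers unfamiliar with the term, a stencil code updates each point of a grid from a fixed pattern of neighbors, sweep after sweep. A minimal 5-point Jacobi sweep in C, an illustrative sketch rather than code from the cited paper:

#include <stddef.h>

/* One Jacobi sweep of a 5-point stencil on an n x n grid: every
   interior point becomes the average of itself and its four
   neighbors. Reads come from `in` and writes go to `out`, so the
   sweep is trivially parallel across grid points. */
void stencil_sweep(size_t n, const double *in, double *out) {
    for (size_t i = 1; i + 1 < n; i++)
        for (size_t j = 1; j + 1 < n; j++)
            out[i * n + j] = 0.2 * (in[i * n + j] +
                                    in[(i - 1) * n + j] +
                                    in[(i + 1) * n + j] +
                                    in[i * n + (j - 1)] +
                                    in[i * n + (j + 1)]);
}

Iterating such sweeps over large grids is exactly the memory-bound pattern the abstract targets: every point is touched repeatedly, so computing where the data lives avoids shuttling the whole grid across the memory bus on each sweep.
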
5. Waterson, Clare, and B. Keith Jenkins. "Shared-memory optical/electronic computer: architecture and control." Applied Optics 33, no. 8 (March 10, 1994): 1559. http://dx.doi.org/10.1364/ao.33.001559.

6

AKL, SELIM G. "THREE COUNTEREXAMPLES TO DISPEL THE MYTH OF THE UNIVERSAL COMPUTER." Parallel Processing Letters 16, no. 03 (September 2006): 381–403. http://dx.doi.org/10.1142/s012962640600271x.

Full text
Abstract:
It is shown that the concept of a Universal Computer cannot be realized. Specifically, instances of a computable function [Formula: see text] are exhibited that cannot be computed on any machine [Formula: see text] that is capable of only a finite and fixed number of operations per step. This remains true even if the machine [Formula: see text] is endowed with an infinite memory and the ability to communicate with the outside world while it is attempting to compute [Formula: see text]. It also remains true if, in addition, [Formula: see text] is given an indefinite amount of time to compute [Formula: see text]. This result applies not only to idealized models of computation, such as the Turing Machine and the like, but also to all known general-purpose computers, including existing conventional computers (both sequential and parallel), as well as contemplated unconventional ones such as biological and quantum computers. Even accelerating machines (that is, machines that increase their speed at every step) cannot be universal.
APA, Harvard, Vancouver, ISO, and other styles
7

MILES, COE F., and DAVID ROGERS. "A BIOLOGICALLY MOTIVATED ASSOCIATIVE MEMORY ARCHITECTURE." International Journal of Neural Systems 04, no. 02 (June 1993): 109–27. http://dx.doi.org/10.1142/s0129065793000110.

Full text
Abstract:
A synthesis of analytical techniques from the fields of biology, mathematics, computer science and engineering are used to model the information processing characteristics of the mammalian cerebellar cortex. By viewing anatomically different neurons as representing network elements whose input-output functions are different, a mechanism for distributing information throughout the memory is proposed. The functional circuitry developed to implement this feature is called the microcircuit. Overlapping microcircuit activity is used to describe the memory's read and write operations. Key features of the memory model include: (1) its use of a sparse interconnection network, (2) its ability to manipulate very large input patterns, (3) its distributed storage of input data patterns and (4) its statistical reconstruction of stored patterns during memory read operations. Quantitative measures for the memory's recall fidelity and storage capacity are derived and results of computer simulations are presented.
APA, Harvard, Vancouver, ISO, and other styles
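The behavior this abstract sketches, distributed storage across many locations with statistical reconstruction on read, is in the same family as Kanerva-style sparse distributed memory. The toy C model below is my own illustration with arbitrary sizes, not the paper's microcircuit model, and it assumes a GCC/Clang compiler for __builtin_popcountll:

#include <stdio.h>
#include <stdlib.h>

/* Toy sparse distributed memory: addresses and data are 64-bit words.
   A write activates every hard location whose address lies within
   Hamming distance R and nudges its bit counters; a read sums the
   counters of activated locations and thresholds the result,
   statistically reconstructing the stored pattern. */
#define ROWS 2048   /* hard locations    */
#define D    64     /* bits per word     */
#define R    24     /* activation radius */

static unsigned long long addr_of[ROWS];
static int counters[ROWS][D];

static int hamming(unsigned long long a, unsigned long long b) {
    return __builtin_popcountll(a ^ b);
}

void sdm_write(unsigned long long addr, unsigned long long data) {
    for (int r = 0; r < ROWS; r++)
        if (hamming(addr_of[r], addr) <= R)
            for (int b = 0; b < D; b++)
                counters[r][b] += (data >> b & 1) ? 1 : -1;
}

unsigned long long sdm_read(unsigned long long addr) {
    long sum[D] = {0};
    for (int r = 0; r < ROWS; r++)
        if (hamming(addr_of[r], addr) <= R)
            for (int b = 0; b < D; b++)
                sum[b] += counters[r][b];
    unsigned long long out = 0;
    for (int b = 0; b < D; b++)
        if (sum[b] > 0) out |= 1ULL << b;
    return out;
}

int main(void) {
    srand(1);
    for (int r = 0; r < ROWS; r++)      /* random hard addresses */
        for (int i = 0; i < 4; i++)
            addr_of[r] = addr_of[r] << 16 | (rand() & 0xFFFF);
    unsigned long long key = 0x0123456789ABCDEFULL;
    sdm_write(key, 0xDEADBEEFCAFEF00DULL);
    printf("%llx\n", sdm_read(key));    /* recovers the pattern */
    return 0;
}

Because each pattern is smeared across roughly 3% of the 2048 locations, a noisy or partial address still activates mostly the right rows, which is where the "statistical reconstruction" in the abstract comes from.
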
8. Jacobson, Peter, Bo Kågström, and Mikael Rännar. "Algorithm Development for Distributed Memory Multicomputers Using CONLAB." Scientific Programming 1, no. 2 (1992): 185–203. http://dx.doi.org/10.1155/1992/365325.

Abstract: CONLAB (CONcurrent LABoratory) is an environment for developing algorithms for parallel computer architectures and for simulating different parallel architectures. A user can experimentally verify, and obtain a picture of, the real performance of a parallel algorithm executing on a simulated target architecture. CONLAB gives high-level support for expressing computations and communications in a distributed memory multicomputer (DMM) environment. A development methodology for DMM algorithms that is based on different levels of abstraction of the problem, the target architecture, and the CONLAB language itself is presented and illustrated with two examples. Simulation results for, and real experiments on, the Intel iPSC/2 hypercube are presented. Because CONLAB is developed to run on uniprocessor UNIX workstations, it is an educational tool that offers interactive (simulated) parallel computing to a wide audience.

9. Jan, Yahya, and Lech Jóźwiak. "Communication and Memory Architecture Design of Application-Specific High-End Multiprocessors." VLSI Design 2012 (March 25, 2012): 1–20. http://dx.doi.org/10.1155/2012/794753.

Abstract: This paper is devoted to the design of communication and memory architectures of massively parallel hardware multiprocessors necessary for the implementation of highly demanding applications. We demonstrate that for massively parallel hardware multiprocessors the traditionally used flat communication architectures and multi-port memories do not scale well, and that the influence of the memory and communication network on both throughput and circuit area dominates that of the processors. To resolve these problems and ensure scalability, we propose to design highly optimized application-specific hierarchical and/or partitioned communication and memory architectures by exploring and exploiting the regularity and hierarchy of the actual data flows of a given application. Furthermore, we propose data distribution and related data mapping schemes in the shared (global) partitioned memories with the aim of eliminating memory access conflicts, as well as ensuring that our communication design strategies will be applicable. We incorporated these architecture synthesis strategies into our quality-driven model-based multiprocessor design method and the related automated architecture exploration framework. Using this framework, we performed a large series of experiments that demonstrate many important features of the synthesized memory and communication architectures. They also demonstrate that our method and framework can efficiently synthesize well-scalable memory and communication architectures even for high-end multiprocessors. Gains as high as 12 times in performance and 25 times in area can be obtained when using hierarchical communication networks instead of flat networks. However, at high parallelism levels only the partitioned approach ensures scalability in performance.

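One concrete instance of the data-distribution idea in this abstract is low-order address interleaving, which spreads consecutive words across memory banks so that processors streaming through an array in lockstep hit different banks. A small illustrative C sketch; the mapping is a textbook scheme, not necessarily the one the paper proposes:

#include <stdio.h>

/* Low-order interleaving across NBANKS single-port banks: word w
   lives in bank (w % NBANKS) at offset (w / NBANKS). NBANKS
   processors reading words w, w+1, ..., w+NBANKS-1 in the same
   cycle therefore touch NBANKS distinct banks: no conflict. */
#define NBANKS 8

typedef struct { unsigned bank, offset; } Slot;

static Slot map_word(unsigned w) {
    Slot s = { w % NBANKS, w / NBANKS };
    return s;
}

int main(void) {
    /* Eight PEs each grab one word of a contiguous block. */
    for (unsigned pe = 0; pe < NBANKS; pe++) {
        Slot s = map_word(100 + pe);
        printf("PE%u -> bank %u, offset %u\n", pe, s.bank, s.offset);
    }
    return 0;
}

A fixed scheme like this is defeated by stride-NBANKS accesses, where every reference lands in one bank; that is one reason application-specific distributions derived from the actual data flows, as the abstract describes, can pay off.
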
10. Rez, Peter, and D. J. Fathers. "Computer system architecture for image and spectral processing." Proceedings, Annual Meeting, Electron Microscopy Society of America 45 (August 1987): 92–95. http://dx.doi.org/10.1017/s0424820100125415.

Abstract: In this paper we discuss digital imaging and spectroscopy systems from the perspective of a system designer, concentrating on those design choices that limit performance in microscopy and analysis applications. The hardware of a computer system can be broken down into three main components: the processor, which performs arithmetic and logical operations; the memory, which stores data and instructions; and the peripherals, for long-term data storage (disks, tapes) and communication with the outside world. Linking these components is a data highway, or bus, for passing digital information from one section of the machine to another. A good definition of a bus is a set of interconnections with a defined procedure (protocol) for information transmission. In many small systems the bus is not only a set of electrical connections but also an enclosure (a backplane) into which the different modules (processor, memory, peripheral controllers) are added.


Dissertations / Theses on the topic "Computer Memory Architecture"

1. Scrbak, Marko. "Methodical Evaluation of Processing-in-Memory Alternatives." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1505199/.

Abstract: In this work, I characterized a series of potential application kernels using a set of architectural and non-architectural metrics and performed a comparison of four different alternatives for processing-in-memory cores (PIMs): ARM cores, GPGPUs, coarse-grained reconfigurable dataflow (DF-PIM), and a domain-specific architecture using a SIMD PIM engine consisting of a series of multiply-accumulate circuits (MACs). For each PIM alternative I investigated how performance and energy efficiency change with respect to a series of system parameters, such as memory bandwidth and latency, number of PIM cores, DVFS states, and cache architecture. In addition, I compared the PIM core choices for a subset of applications and discussed how the application characteristics correlate with the achieved performance and energy efficiency. Furthermore, I compared the PIM alternatives to a host-centric solution that uses a traditional server-class CPU core, or PIM-like cores acting as host-side accelerators instead of being part of 3D-stacked memories. Such insights can expose the achievable performance limits and shortcomings of certain PIM designs and show sensitivity to a series of system parameters (available memory bandwidth, application latency and bandwidth sensitivity, etc.). In addition, identifying the common application characteristics of PIM kernels provides an opportunity to identify similar computation patterns in other applications and allows us to create a set of applications that can then be used as benchmarks for evaluating future PIM design alternatives.

2. Lee, Joonwon. "Architectural features for scalable shared memory multiprocessors." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/8200.

3. Rixner, Scott. "Memory system architecture for real-time multitasking systems." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36599.

4. Rankin, Linda J. "A dual-ported real memory architecture for the g-machine." Thesis, 1986. http://content.ohsu.edu/u?/etd,117.

5. Chi, Hsiang. "Flash memory boot block architecture for safe firmware updates." Thesis, Florida International University, 1995. http://digitalcommons.fiu.edu/etd/2160.

Abstract: The most significant risk of updating embedded system code is the possible loss of system firmware during the update process. If the firmware is lost, the system will cease to operate, which can be very costly to the end user. This thesis is concerned with exploring alternate architectures which exploit the integration of flash memory technology in order to overcome this problem. Three design models and associated software techniques will be presented. These design models are described in detail in terms of the strategies they employ in order to prevent system lockup and the loss of firmware. The most important objective, which is addressed in the third model, is to ensure that the system can continue to process interrupts during the update. In addition, a portion of this research was aimed at providing the capability to perform updates remotely, and at maximizing system code memory space and available system RAM.

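The failure mode this abstract describes, losing firmware mid-update, is classically avoided with a two-image layout: write the new image to an inactive region, verify it, and only then commit. The C sketch below models the slots in RAM purely for illustration; the layout, magic value, and checksum are invented, not the thesis's design or a vendor API:

#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

/* Two application slots; on a real part these are flash sectors and
   the memset/memcpy calls go through the flash driver. The tiny boot
   block that chooses a slot is never erased, so a power loss during
   an update can never brick the device. */
#define SLOT_SIZE 0x10000u
#define MAGIC_VALID 0xA5A55A5Au
static uint8_t slot_a[SLOT_SIZE], slot_b[SLOT_SIZE];

typedef struct { uint32_t magic, crc, size; } ImageHeader;

/* Stand-in for a CRC; a real updater would use CRC-32 or a hash. */
static uint32_t checksum(const uint8_t *p, uint32_t len) {
    uint32_t c = 0;
    while (len--) c = c * 31 + *p++;
    return c;
}

/* Write the new image into the inactive slot, verify, then commit by
   writing the header last. Until the magic word lands, the boot block
   still boots the old image from the active slot. */
static bool firmware_update(uint8_t *inactive, const uint8_t *img,
                            uint32_t len, uint32_t crc) {
    memset(inactive, 0xFF, SLOT_SIZE);  /* "erase" */
    memcpy(inactive + sizeof(ImageHeader), img, len);
    if (checksum(inactive + sizeof(ImageHeader), len) != crc)
        return false;                   /* verify before commit */
    ImageHeader h = { MAGIC_VALID, crc, len };
    memcpy(inactive, &h, sizeof h);     /* commit: header written last */
    return true;
}

int main(void) {
    uint8_t img[] = "new firmware image";
    bool ok = firmware_update(slot_b, img, sizeof img,
                              checksum(img, sizeof img));
    printf("update %s\n", ok ? "committed" : "rolled back");
    return 0;
}
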
6. Khurana, Harneet Singh. "Memory and data communication link architecture for micro-implants." S.M. thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2010. http://hdl.handle.net/1721.1/57686.

Abstract: With the growing need for the development of smaller implantable monitors, alternative energy storage sources such as high-density ultracapacitors are envisioned to replace the bulky batteries in these devices. Ultracapacitors have the potential to be integrated on a silicon wafer, and have the benefits of an unlimited number of recharge cycles and extremely rapid recharging times. However, they present an essential challenge in that the voltage drops rapidly with energy drain. In this thesis, we explore a data storage memory that is compatible with ultracapacitor energy storage. In addition, we propose and demonstrate a low-power wireless link that exploits RFID techniques as a way of uploading the stored data.

7. Mui, Eric Y. "Optimizing memory access for the Architecture Exploration System (ARIES)." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86536.

8. Kamolpornwijit, Witchakorn. "P-TAXI: Enforcing memory safety with programmable tagged architecture." M.Eng. thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. http://hdl.handle.net/1721.1/105996.

Abstract: Buffer overflow is a well-known problem that remains a threat to software security. With the advancement of code-reuse attacks and return-oriented programming (ROP), it becomes problematic to protect a program from being compromised. Several defenses have been developed in an attempt to defeat code-reuse attacks. However, there is still no solution that provides complete protection with low overhead. In this thesis, we improved TAXI, a ROP defense technique that utilizes a tagged architecture to prevent memory violations. Inspired by Programmable Unit for Metadata Processing (PUMP), we modified TAXI so that enforcement policies can be programmed by user-level code and called it P-TAXI (Programmable TAXI). We demonstrated that, by using P-TAXI, we were able to enforce memory safety policies, including return address protection, stack garbage collection, and memory compartmentalization. In addition, we showed that P-TAXI can be used for debugging and taint tracking.

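A tagged architecture of the kind this thesis builds on pairs every memory word with metadata bits and checks a policy on each access. The C sketch below is a software model of that general mechanism, not P-TAXI's actual policy engine; the single rule shown, that a word tagged as a return address may not be overwritten by ordinary stores, is an invented example of the class of policies such hardware can enforce:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Every word in `mem` carries a tag in a parallel shadow array, and a
   policy hook runs on each store, as tagged hardware would do inline. */
enum tag { T_NONE, T_RETADDR };

#define WORDS 1024
static uint64_t mem[WORDS];
static enum tag tags[WORDS];

static void store(unsigned addr, uint64_t val, enum tag t) {
    if (tags[addr] == T_RETADDR && t != T_RETADDR) {
        fprintf(stderr, "policy violation: store to return address slot %u\n",
                addr);
        abort();                       /* hardware would raise a trap */
    }
    mem[addr] = val;
    tags[addr] = t;
}

int main(void) {
    store(16, 0x400123, T_RETADDR);    /* call pushes a return address  */
    store(17, 42, T_NONE);             /* ordinary store: allowed       */
    store(16, 0xdeadbeef, T_NONE);     /* overflow-style overwrite: trap */
    return 0;
}
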
9. Ainsworth, Sam. "Prefetching for complex memory access patterns." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277804.

Abstract: Modern-day workloads, particularly those in big data, are heavily memory-latency bound. This is because of both irregular memory accesses, which have no discernible pattern in their memory addresses, and large data sets that cannot fit in any cache. However, this need not be a barrier to high performance. With some data structure knowledge it is typically possible to bring data into the fast on-chip memory caches early, so that it is already available by the time it needs to be accessed. This thesis makes three contributions. I first contribute an automated software prefetching compiler technique to insert high-performance prefetches into program code to bring data into the cache early, achieving 1.3x geometric mean speedup on the most complex processors, and 2.7x on the simplest. I also provide an analysis of when and why this is likely to be successful, which data structures to target, and how to schedule software prefetches well. Then I introduce a hardware solution, the configurable graph prefetcher. This uses the example of breadth-first search on graph workloads to motivate how a hardware prefetcher armed with data-structure knowledge can avoid the instruction overheads, inflexibility, and limited latency tolerance of software prefetching. The configurable graph prefetcher sits at the L1 cache, observes memory accesses, and can be configured by a programmer to be aware of a limited number of different data access patterns, achieving 2.3x geometric mean speedup on graph workloads on an out-of-order core. My final contribution extends the hardware used for the configurable graph prefetcher to make an event-triggered programmable prefetcher, using a set of very small micro-controller-sized programmable prefetch units (PPUs) to cover a wide set of workloads. I do this by developing a highly parallel programming model that can be used to issue prefetches, thus allowing high-throughput prefetching with low power and area overheads of only around 3%, and a 3x geometric mean speedup for a variety of memory-bound applications. To facilitate its use, I then develop compiler techniques to help automate the process of targeting the programmable prefetcher. These provide a variety of tradeoffs, from easiest to use to best performance.

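Software prefetching of the kind the first contribution automates can also be written by hand for irregular, index-driven accesses. A small C sketch of the general technique using GCC/Clang's __builtin_prefetch; it is my illustration, while the thesis inserts such prefetches automatically and schedules the lookahead distance more carefully:

#include <stddef.h>

/* Gather y[i] = data[index[i]]: the address of each load is only known
   from index[], so a hardware stride prefetcher cannot predict it.
   Requesting data[index[i + AHEAD]] while working on element i hides
   the memory latency; AHEAD is a machine-dependent tuning knob. */
#define AHEAD 16

void gather(double *y, const double *data,
            const unsigned *index, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (i + AHEAD < n)
            __builtin_prefetch(&data[index[i + AHEAD]], 0 /* read */, 1);
        y[i] = data[index[i]];
    }
}

Note that index[i + AHEAD] itself is a sequential access that hardware prefetchers handle well; only the dependent, data-driven load needs the explicit hint.
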
10. Li, Wentong. "High performance architecture using speculative threads and dynamic memory management hardware." Thesis, University of North Texas, 2007. http://digital.library.unt.edu/permalink/meta-dc-5150.


Books on the topic "Computer Memory Architecture"

1. Gössel, Michael. Memory architecture & parallel access. Amsterdam: Elsevier, 1994.

2. Grun, Peter. Memory architecture exploration for programmable embedded systems. Boston, MA: Kluwer Academic Publishers, 2002.

3. Grun, Peter. Memory architecture exploration for programmable embedded systems. Boston: Kluwer Academic Publishers, 2003.

4. Tick, Evan. Memory performance of prolog architectures. Boston: Kluwer Academic Publishers, 1988.

5. Van Rosendale, John R., and Institute for Computer Applications in Science and Engineering, eds. Programming distributed memory architectures using KALI. Hampton, VA: Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1990.

6. Kühn, Eva, ed. Virtual shared memory for distributed architectures. Huntington, NY: Nova Science Publishers, 2001.

7. Cragon, Harvey G. Memory systems and pipelined processors. Sudbury, Mass.: Jones and Bartlett, 1996.

8. Mace, Mary E. Memory storage patterns in parallel processing. Boston: Kluwer Academic, 1987.

9. Standley, Hilda M. Computer architecture evaluation for structural dynamics computations: Final technical report, project summary. Toledo, Ohio: Dept. of Computer Science, University of Toledo, 1989.


Book chapters on the topic "Computer Memory Architecture"

1. Blanchet, Gérard, and Bertrand Dupouy. "Memory." In Computer Architecture, 139–56. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118577431.ch7.

2. Chalk, B. S. "Computer Memory." In Computer Organisation and Architecture, 83–117. London: Macmillan Education UK, 1996. http://dx.doi.org/10.1007/978-1-349-13871-5_6.

3. Blanchet, Gérard, and Bertrand Dupouy. "Virtual Memory." In Computer Architecture, 175–204. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118577431.ch9.

4. Mueller, Silvia Melitta, and Wolfgang J. Paul. "Memory System Design." In Computer Architecture, 239–316. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-662-04267-0_6.

5. Burrell, Mark. "Memory." In Fundamentals of Computer Architecture, 109–28. London: Macmillan Education UK, 2004. http://dx.doi.org/10.1007/978-1-137-11313-9_7.

6. Wang, Shuangbao Paul. "Computer Memory and Storage." In Computer Architecture and Organization, 45–69. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5662-0_3.

7. Jiang, Hongwu, Shanshi Huang, and Shimeng Yu. "Compute-in-Memory Architecture." In Handbook of Computer Architecture, 1–40. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-15-6401-7_62-1.

8. Chalk, B. S., A. T. Carter, and R. W. Hind. "Primary memory." In Computer Organisation and Architecture, 89–108. London: Macmillan Education UK, 2004. http://dx.doi.org/10.1007/978-0-230-00060-5_6.

9. Chalk, B. S., A. T. Carter, and R. W. Hind. "Secondary memory." In Computer Organisation and Architecture, 109–22. London: Macmillan Education UK, 2004. http://dx.doi.org/10.1007/978-0-230-00060-5_7.

10. Kim, Martha A., and Stephen A. Edwards. "Computation vs. Memory Systems: Pinning Down Accelerator Bottlenecks." In Computer Architecture, 86–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24322-6_9.


Conference papers on the topic "Computer Memory Architecture"

1. Waterson, Clare, and B. Keith Jenkins. "Shared Memory Optical/Electronic Computer: Architecture Design." In Optical Computing. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/optcomp.1991.tua3.

Abstract: Several abstract models of parallel computation have been developed and studied by the computer science and parallel processing communities [1, 2]. The shared memory models are among the most computationally powerful of these models. They benefit from substantial theoretical foundations, and many algorithms have been mapped onto these models in order to characterize theoretically optimum parallel performance. A number of attempts have been made to develop electronic parallel architectures based on the shared memory model. Most of them have been unsuccessful, primarily due to the complexity of the interconnection network hardware and its associated control.

2. Pelley, Steven, Peter M. Chen, and Thomas F. Wenisch. "Memory persistency." In 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA). IEEE, 2014. http://dx.doi.org/10.1109/isca.2014.6853222.

3. Young, Vinson, Sanjay Kariyappa, and Moinuddin K. Qureshi. "Enabling Transparent Memory-Compression for Commodity Memory Systems." In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2019. http://dx.doi.org/10.1109/hpca.2019.00010.

4. McCorkle, Eric. "Programmable bus/memory controllers in modern computer architecture." In Proceedings of the 43rd Annual Southeast Regional Conference. New York, NY, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1167350.1167408.

5. Sargsyan, David. "ISO 26262 compliant memory BIST architecture." In 2017 Computer Science and Information Technologies (CSIT). IEEE, 2017. http://dx.doi.org/10.1109/csitechnol.2017.8312145.

6. Shriraman, Arrvindh, Sandhya Dwarkadas, and Michael L. Scott. "Flexible Decoupled Transactional Memory Support." In 2008 35th International Symposium on Computer Architecture (ISCA). IEEE, 2008. http://dx.doi.org/10.1109/isca.2008.17.

7. Baugh, Lee, Naveen Neelakantam, and Craig Zilles. "Using Hardware Memory Protection to Build a High-Performance, Strongly-Atomic Hybrid Transactional Memory." In 2008 35th International Symposium on Computer Architecture (ISCA). IEEE, 2008. http://dx.doi.org/10.1109/isca.2008.34.

8. Lee, Gyu-hyeon, Dongmoon Min, Ilkwon Byun, and Jangwoo Kim. "Cryogenic computer architecture modeling with memory-side case studies." In ISCA '19: The 46th Annual International Symposium on Computer Architecture. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3307650.3322219.

9. Sullivan, Michael B., Mohamed Tarek Ibn Ziad, Aamer Jaleel, and Stephen W. Keckler. "Implicit Memory Tagging: No-Overhead Memory Safety Using Alias-Free Tagged ECC." In ISCA '23: 50th Annual International Symposium on Computer Architecture. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3579371.3589102.

10. Xue, Dongliang, Chao Li, Linpeng Huang, Chentao Wu, and Tianyou Li. "Adaptive Memory Fusion: Towards Transparent, Agile Integration of Persistent Memory." In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2018. http://dx.doi.org/10.1109/hpca.2018.00036.


Reports on the topic "Computer Memory Architecture"

1. Cheriton, David R., Hendrik A. Goosen, and Patrick D. Boyle. ParaDiGM: A Highly Scalable Shared-Memory Multi-Computer Architecture. Fort Belvoir, VA: Defense Technical Information Center, November 1990. http://dx.doi.org/10.21236/ada325912.
