Journal articles on the topic 'Sparse distributed memory'

Consult the top 50 journal articles for your research on the topic 'Sparse distributed memory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Murdock, Bennet. "Sparse distributed memory." Acta Psychologica 76, no. 1 (1991): 92–94. http://dx.doi.org/10.1016/0001-6918(91)90056-6.
2. Kang, Mingu, and Naresh R. Shanbhag. "In-Memory Computing Architectures for Sparse Distributed Memory." IEEE Transactions on Biomedical Circuits and Systems 10, no. 4 (2016): 855–63. http://dx.doi.org/10.1109/tbcas.2016.2545402.
3. Snaider, Javier, Stan Franklin, Steve Strain, and E. Olusegun George. "Integer sparse distributed memory: Analysis and results." Neural Networks 46 (October 2013): 144–53. http://dx.doi.org/10.1016/j.neunet.2013.05.005.
4. Hong, Y. S., and S. S. Chen. "Character recognition in a sparse distributed memory." IEEE Transactions on Systems, Man, and Cybernetics 21, no. 3 (1991): 674–78. http://dx.doi.org/10.1109/21.97459.
5. Anwar, Ashraf, and Stan Franklin. "Sparse distributed memory for ‘conscious’ software agents." Cognitive Systems Research 4, no. 4 (2003): 339–54. http://dx.doi.org/10.1016/s1389-0417(03)00015-9.
6. Wu, J., R. Das, J. Saltz, H. Berryman, and S. Hiranandani. "Distributed memory compiler design for sparse problems." IEEE Transactions on Computers 44, no. 6 (1995): 737–53. http://dx.doi.org/10.1109/12.391186.
7. Snaider, Javier, and Stan Franklin. "Extended Sparse Distributed Memory and Sequence Storage." Cognitive Computation 4, no. 2 (2012): 172–80. http://dx.doi.org/10.1007/s12559-012-9125-8.
8. Furber, Steve B., W. John Bainbridge, J. Mike Cumpstey, and Steve Temple. "Sparse distributed memory using N-of-M codes." Neural Networks 17, no. 10 (2004): 1437–51. http://dx.doi.org/10.1016/j.neunet.2004.07.003.
9. Silva, Marcus Tadeu Pinheiro, Antônio Pádua Braga, and Wilian Soares Lacerda. "Reconfigurable co-processor for Kanerva's sparse distributed memory." Microprocessors and Microsystems 28, no. 3 (2004): 127–34. http://dx.doi.org/10.1016/j.micpro.2004.01.003.
10. Hely, T. A., D. J. Willshaw, and G. M. Hayes. "A new approach to Kanerva's sparse distributed memory." IEEE Transactions on Neural Networks 8, no. 3 (1997): 791–94. http://dx.doi.org/10.1109/72.572115.
11. Furber, S. B., G. Brown, J. Bose, J. M. Cumpstey, P. Marshall, and J. L. Shapiro. "Sparse Distributed Memory Using Rank-Order Neural Codes." IEEE Transactions on Neural Networks 18, no. 3 (2007): 648–59. http://dx.doi.org/10.1109/tnn.2006.890804.
12. Guarracino, Mario R., Francesca Perla, and Paolo Zanetti. "A sparse nonsymmetric eigensolver for distributed memory architectures." International Journal of Parallel, Emergent and Distributed Systems 23, no. 3 (2008): 259–70. http://dx.doi.org/10.1080/17445760701640324.
13. Sun, Chunguang. "Parallel Sparse Orthogonal Factorization on Distributed-Memory Multiprocessors." SIAM Journal on Scientific Computing 17, no. 3 (1996): 666–85. http://dx.doi.org/10.1137/s1064827593260449.
14. Dey, Sumon, Lee Baker, Joshua Schabel, Weifu Li, and Paul D. Franzon. "A Scalable Cluster-based Hierarchical Hardware Accelerator for a Cortically Inspired Algorithm." ACM Journal on Emerging Technologies in Computing Systems 17, no. 4 (2021): 1–29. http://dx.doi.org/10.1145/3447777.
Abstract: This article describes a scalable, configurable and cluster-based hierarchical hardware accelerator through custom hardware architecture for Sparsey, a cortical learning algorithm. Sparsey is inspired by the operation of the human cortex and uses a Sparse Distributed Representation to enable unsupervised learning and inference in the same algorithm. A distributed on-chip memory organization is designed and implemented in custom hardware to improve memory bandwidth and accelerate the memory read/write operations for synaptic weight matrices. Bit-level data are processed from distributed on-chip…
15. Hsu, Ching-Hsien. "Sparse Matrix Block-Cyclic Realignment on Distributed Memory Machines." Journal of Supercomputing 33, no. 3 (2005): 175–96. http://dx.doi.org/10.1007/s11227-005-0247-6.
16. Yang, Chao, Padma Raghavan, Lloyd Arrowood, Donald W. Noid, Bobby G. Sumpter, and Robert E. Tuzun. "Large-Scale Normal Coordinate Analysis on Distributed Memory Parallel Systems." International Journal of High Performance Computing Applications 16, no. 4 (2002): 409–24. http://dx.doi.org/10.1177/109434200201600404.
Abstract: A parallel computational scheme for analyzing large-scale molecular vibration on distributed memory computing platforms is presented in this paper. This method combines the implicitly restarted Lanczos algorithm with a state-of-the-art parallel sparse direct solver to compute a set of low-frequency vibrational modes for molecular systems containing tens of thousands of atoms. Although the original motivation for developing such a scheme was to overcome memory limitations on traditional sequential and shared memory machines, our computational experiments show that with a careful parallel de…
17. Maros, I., and G. Mitra. "Investigating the sparse simplex algorithm on a distributed memory multiprocessor." Parallel Computing 26, no. 1 (2000): 151–70. http://dx.doi.org/10.1016/s0167-8191(99)00100-3.
18. Chen, Chao, Hadi Pouransari, Sivasankaran Rajamanickam, Erik G. Boman, and Eric Darve. "A distributed-memory hierarchical solver for general sparse linear systems." Parallel Computing 74 (May 2018): 49–64. http://dx.doi.org/10.1016/j.parco.2017.12.004.
19. Gupta, Anshul. "A shared- and distributed-memory parallel general sparse direct solver." Applicable Algebra in Engineering, Communication and Computing 18, no. 3 (2007): 263–77. http://dx.doi.org/10.1007/s00200-007-0037-x.
20. Rothberg, Edward. "Alternatives for solving sparse triangular systems on distributed-memory multiprocessors." Parallel Computing 21, no. 7 (1995): 1121–36. http://dx.doi.org/10.1016/0167-8191(95)00003-7.
21. Lin, Chun-Yuan, and Yeh-Ching Chung. "Data distribution schemes of sparse arrays on distributed memory multicomputers." Journal of Supercomputing 41, no. 1 (2007): 63–87. http://dx.doi.org/10.1007/s11227-007-0104-x.
22. Wixted, John T., Stephen D. Goldinger, Larry R. Squire, et al. "Coding of episodic memory in the human hippocampus." Proceedings of the National Academy of Sciences 115, no. 5 (2018): 1093–98. http://dx.doi.org/10.1073/pnas.1716443115.
Abstract: Neurocomputational models have long posited that episodic memories in the human hippocampus are represented by sparse, stimulus-specific neural codes. A concomitant proposal is that when sparse-distributed neural assemblies become active, they suppress the activity of competing neurons (neural sharpening). We investigated episodic memory coding in the hippocampus and amygdala by measuring single-neuron responses from 20 epilepsy patients (12 female) undergoing intracranial monitoring while they completed a continuous recognition memory task. In the left hippocampus, the distribution of single-…
23. Mendes, Mateus, A. Paulo Coimbra, and Manuel M. Crisóstomo. "Robot navigation based on view sequences stored in a sparse distributed memory." Robotica 30, no. 4 (2011): 571–81. http://dx.doi.org/10.1017/s0263574711000828.
Abstract: Robot navigation is a large area of research, where many different approaches have already been tried, including navigation based on visual memories. The Sparse Distributed Memory (SDM) is a kind of associative memory based on the properties of high-dimensional binary spaces. It exhibits characteristics such as tolerance to noise and incomplete data, the ability to work with sequences, and the possibility of one-shot learning. Those characteristics make it appealing to use for robot navigation. The approach followed here was to navigate a robot using sequences of visual memories stored into…
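The SDM model summarized in the Mendes et al. abstract above is simple enough to sketch in code. Below is a minimal, illustrative Python sketch of a Kanerva-style SDM, assuming binary vectors, randomly placed hard locations, a Hamming-radius activation rule, and bipolar counters; the dimensions, location count, and radius are arbitrary choices for the example, not values from any cited paper.

```python
import numpy as np

class SDM:
    """Minimal Kanerva-style sparse distributed memory (illustrative sketch)."""

    def __init__(self, n_locations=1000, dim=256, radius=110, seed=0):
        rng = np.random.default_rng(seed)
        # Hard locations: fixed random points in the binary space {0,1}^dim.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        # Each hard location holds one signed counter per bit.
        self.counters = np.zeros((n_locations, dim), dtype=np.int32)
        self.radius = radius

    def _active(self, address):
        # Activate every hard location within Hamming distance `radius`.
        dists = np.count_nonzero(self.addresses != address, axis=1)
        return dists <= self.radius

    def write(self, address, data):
        # Add +1 for each 1-bit and -1 for each 0-bit of `data`
        # to the counters of all active locations.
        self.counters[self._active(address)] += 2 * data - 1

    def read(self, address):
        # Pool the counters of the active locations and threshold at zero.
        total = self.counters[self._active(address)].sum(axis=0)
        return (total > 0).astype(int)

# One-shot store, then recall from a corrupted cue.
rng = np.random.default_rng(1)
mem = SDM()
pattern = rng.integers(0, 2, size=256)
mem.write(pattern, pattern)   # autoassociative write
cue = pattern.copy()
cue[:20] ^= 1                 # corrupt 20 of 256 bits
print(np.count_nonzero(mem.read(cue) != pattern))  # typically 0
```

The noisy recall at the end is the property the abstract highlights: because each read pools many hard locations near the cue, moderate corruption of the address still retrieves the stored pattern.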
24. Rogers, David. "Kanerva's sparse distributed memory: An associative memory algorithm well-suited to the Connection Machine." International Journal of High Speed Computing 1, no. 2 (1989): 349–65. http://dx.doi.org/10.1142/s0129053389000196.
25. Scott, E. A., C. R. Fuller, W. F. O'Brien, and R. H. Cabell. "Sparse distributed associative memory for the identification of aerospace acoustic sources." AIAA Journal 31, no. 9 (1993): 1583–89. http://dx.doi.org/10.2514/3.11818.
26. Linhares, Alexandre, Daniel M. Chada, and Christian N. Aranha. "The Emergence of Miller's Magic Number on a Sparse Distributed Memory." PLoS ONE 6, no. 1 (2011): e15592. http://dx.doi.org/10.1371/journal.pone.0015592.
27. Xin, Zixing, Jianlin Xia, Maarten V. de Hoop, Stephen Cauley, and Venkataramanan Balakrishnan. "A Distributed-Memory Randomized Structured Multifrontal Method for Sparse Direct Solutions." SIAM Journal on Scientific Computing 39, no. 4 (2017): C292–C318. http://dx.doi.org/10.1137/16m1079221.
28. Fu, Cong, Xiangmin Jiao, and Tao Yang. "Efficient sparse LU factorization with partial pivoting on distributed memory architectures." IEEE Transactions on Parallel and Distributed Systems 9, no. 2 (1998): 109–25. http://dx.doi.org/10.1109/71.663864.
29. Novotarskyi, M. A., S. G. Stirenko, Y. G. Gordienko, and V. A. Kuzmych. "Deep reinforcement learning with sparse distributed memory for “Water World” problem solving." Radio Electronics, Computer Science, Control 1, no. 1 (2021): 136–43. http://dx.doi.org/10.15588/1607-3274-2021-1-14.
Abstract: Context. Machine learning is one of the actively developing areas of data processing. Reinforcement learning is a class of machine learning methods where the problem involves mapping the sequence of environmental states to the agent's actions. Significant progress in this area has been achieved using DQN algorithms, which became one of the first classes of stable algorithms for learning with deep neural networks. The main disadvantage of this approach is the rapid growth of RAM usage in real-world tasks. The approach proposed in this paper can partially solve this problem. Objective. The aim is t…
30. Gravvanis, George A., and Konstantinos M. Giannoutakis. "Parallel Preconditioned Conjugate Gradient Square Method Based on Normalized Approximate Inverses." Scientific Programming 13, no. 2 (2005): 79–91. http://dx.doi.org/10.1155/2005/508607.
Abstract: A new class of normalized explicit approximate inverse matrix techniques, based on normalized approximate factorization procedures, for solving sparse linear systems resulting from the finite difference discretization of partial differential equations in three space variables is introduced. A new parallel normalized explicit preconditioned conjugate gradient square method, in conjunction with normalized approximate inverse matrix techniques, for solving sparse linear systems efficiently on distributed memory systems using the Message Passing Interface (MPI) communication library is also presented…
31. Borisyuk, Roman, Mike Denham, Frank Hoppensteadt, Yakov Kazanovich, and Olga Vinogradova. "An oscillatory neural network model of sparse distributed memory and novelty detection." Biosystems 58, no. 1–3 (2000): 265–72. http://dx.doi.org/10.1016/s0303-2647(00)00131-3.
32. Sun, Chunguang. "Parallel solution of sparse linear least squares problems on distributed-memory multiprocessors." Parallel Computing 23, no. 13 (1997): 2075–93. http://dx.doi.org/10.1016/s0167-8191(97)00064-1.
33. Amestoy, Patrick R., Iain S. Duff, Jean-Yves L'Excellent, and Xiaoye S. Li. "Analysis and comparison of two general sparse solvers for distributed memory computers." ACM Transactions on Mathematical Software 27, no. 4 (2001): 388–421. http://dx.doi.org/10.1145/504210.504212.
34. Shu, W. "Parallel Implementation of a Sparse Simplex Algorithm on MIMD Distributed Memory Computers." Journal of Parallel and Distributed Computing 31, no. 1 (1995): 25–40. http://dx.doi.org/10.1006/jpdc.1995.1142.
35. Basermann, A. "Conjugate Gradient and Lanczos Methods for Sparse Matrices on Distributed Memory Multiprocessors." Journal of Parallel and Distributed Computing 45, no. 1 (1997): 46–52. http://dx.doi.org/10.1006/jpdc.1997.1364.
36. Manevitz, Larry M., and Yigal Zemach. "Assigning meaning to data: Using sparse distributed memory for multilevel cognitive tasks." Neurocomputing 14, no. 1 (1997): 15–39. http://dx.doi.org/10.1016/0925-2312(95)00130-1.
37. Hämäläinen, Timo, Harri Klapuri, Jukka Saarinen, and Kimmo Kaski. "Parallel realizations of Kanerva's sparse distributed memory on a tree-shaped computer." Concurrency: Practice and Experience 9, no. 9 (1997): 877–96. http://dx.doi.org/10.1002/(sici)1096-9128(199709)9:9<877::aid-cpe276>3.0.co;2-0.
38. Fan, Kuo-Chin, and Yuan-Kai Wang. "A genetic sparse distributed memory approach to the application of handwritten character recognition." Pattern Recognition 30, no. 12 (1997): 2015–22. http://dx.doi.org/10.1016/s0031-3203(97)00017-4.
39. Wixted, John T., Larry R. Squire, Yoonhee Jang, et al. "Sparse and distributed coding of episodic memory in neurons of the human hippocampus." Proceedings of the National Academy of Sciences 111, no. 26 (2014): 9621–26. http://dx.doi.org/10.1073/pnas.1408365111.
40. Lin, Chun-Yuan, Yeh-Ching Chung, and Jen-Shiuh Liu. "Efficient Data Distribution Schemes for EKMR-Based Sparse Arrays on Distributed Memory Multicomputers." Journal of Supercomputing 34, no. 3 (2005): 291–313. http://dx.doi.org/10.1007/s11227-005-0788-8.
41. Kim, S. K., and A. T. Chronopoulos. "An Efficient Parallel Algorithm for Extreme Eigenvalues of Sparse Nonsymmetric Matrices." International Journal of Supercomputing Applications 6, no. 1 (1992): 98–111. http://dx.doi.org/10.1177/109434209200600106.
Abstract: Main memory accesses on shared-memory systems or global communications (synchronizations) in message-passing systems decrease the computation speed. In this paper, the standard Arnoldi algorithm for approximating a small number of eigenvalues with largest (or smallest) real parts of large sparse nonsymmetric matrices is restructured so that only one synchronization point is required; that is, one global communication in a message-passing distributed-memory machine, or one global memory sweep in a shared-memory machine, is required per iteration. We also introduce an s-step Arnoldi metho…
42. Asadi, Mohammadali, Alexander Brandt, Robert H. C. Moir, and Marc Moreno Maza. "Algorithms and Data Structures for Sparse Polynomial Arithmetic." Mathematics 7, no. 5 (2019): 441. http://dx.doi.org/10.3390/math7050441.
Abstract: We provide a comprehensive presentation of algorithms, data structures, and implementation techniques for high-performance sparse multivariate polynomial arithmetic over the integers and rational numbers as implemented in the freely available Basic Polynomial Algebra Subprograms (BPAS) library. We report on an algorithm for sparse pseudo-division, based on the algorithms for division with remainder, multiplication, and addition, which are also examined herein. The pseudo-division and division with remainder operations are extended to multi-divisor pseudo-division and normal form algorithms, re…
43. Yang, Laurence Tianruo. "The improved parallel ICGS method for large and sparse unsymmetric linear systems." Parallel Processing Letters 15, no. 4 (2005): 459–67. http://dx.doi.org/10.1142/s0129626405002374.
Abstract: For the solution of large and sparse linear systems of equations with unsymmetric coefficient matrices, we propose an improved version of the Conjugate Gradient Squared (ICGS) method. The algorithm is derived such that all inner products, matrix-vector multiplications, and vector updates of a single iteration step are independent, and the communication time required for inner products can be overlapped efficiently with the computation time of vector updates. Therefore, the cost of global communication on parallel distributed memory computers can be significantly reduced. The resulting ICGS algori…
44. Yang, Tianruo, and Hai Xiang Lin. "Solving sparse least squares problems with preconditioned CGLS method on parallel distributed memory computers." Parallel Algorithms and Applications 13, no. 4 (1999): 289–305. http://dx.doi.org/10.1080/01495739908947371.
45. Bošanský, Michal, and Bořek Patzák. "Parallel approach to solve of the direct solution of large sparse systems of linear equations." Acta Polytechnica CTU Proceedings 13 (November 13, 2017): 16. http://dx.doi.org/10.14311/app.2017.13.0016.
Abstract: The paper deals with a parallel approach to the numerical solution of large, sparse, non-symmetric systems of linear equations that can be part of any finite element software. In this contribution, the differences between the sequential and parallel solution are highlighted, and the approach to efficiently interfacing with the distributed-memory version of the SuperLU solver is described.
46. Duato, J. "Parallel triangularization of a sparse matrix on a distributed-memory multiprocessor using fast Givens rotations." Linear Algebra and its Applications 121 (August 1989): 582–92. http://dx.doi.org/10.1016/s0024-3795(16)30301-9.
47. Mu, Mo, and John R. Rice. "An organization of sparse Gauss elimination for solving partial differential equations on distributed memory machines." Numerical Methods for Partial Differential Equations 9, no. 2 (1993): 175–89. http://dx.doi.org/10.1002/num.1690090206.
48. Emruli, Blerim, and Fredrik Sandin. "Analogical Mapping with Sparse Distributed Memory: A Simple Model that Learns to Generalize from Examples." Cognitive Computation 6, no. 1 (2013): 74–88. http://dx.doi.org/10.1007/s12559-013-9206-3.
49. Josselyn, Sheena A., and Paul W. Frankland. "Memory Allocation: Mechanisms and Function." Annual Review of Neuroscience 41, no. 1 (2018): 389–413. http://dx.doi.org/10.1146/annurev-neuro-080317-061956.
Abstract: Memories for events are thought to be represented in sparse, distributed neuronal ensembles (or engrams). In this article, we review how neurons are chosen to become part of a particular engram, via a process of neuronal allocation. Experiments in rodents indicate that eligible neurons compete for allocation to a given engram, with more excitable neurons winning this competition. Moreover, fluctuations in neuronal excitability determine how engrams interact, promoting either memory integration (via coallocation to overlapping engrams) or separation (via disallocation to nonoverlapping engrams)…
50. Smith, Barry F., and William D. Gropp. "The Design of Data-Structure-Neutral Libraries for the Iterative Solution of Sparse Linear Systems." Scientific Programming 5, no. 4 (1996): 329–36. http://dx.doi.org/10.1155/1996/417629.
Abstract: Over the past few years several proposals have been made for the standardization of sparse matrix storage formats in order to allow for the development of portable matrix libraries for the iterative solution of linear systems. We believe that this is the wrong approach. Rather than define one standard (or a small number of standards) for matrix storage, the community should define an interface (i.e., the calling sequences) for the functions that act on the data. In addition, we cannot ignore the interface to the vector operations because, in many applications, vectors may not be stored as cons…
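The design principle in the Smith and Gropp abstract above, defining an interface for the operations rather than standardizing a storage format, is easy to illustrate. The sketch below is an illustrative assumption, not the paper's actual proposal: a conjugate gradient solver written against a single matvec callable, so CSR, blocked, or fully matrix-free operators can all be plugged in without the solver knowing the storage layout.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=500):
    """Solve A x = b for symmetric positive definite A, given only
    the action y = matvec(x); no matrix storage format is assumed."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)  # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# A matrix-free operator for the 1D Laplacian: the "matrix" is never
# stored in any format, yet it satisfies the solver's interface.
n = 100
def laplacian_1d(x):
    y = 2.0 * x
    y[1:] -= x[:-1]
    y[:-1] -= x[1:]
    return y

b = np.ones(n)
x = conjugate_gradient(laplacian_1d, b)
print(np.linalg.norm(laplacian_1d(x) - b))  # residual near zero
```

Swapping in a CSR product or a distributed operator requires no change to the solver, which is exactly the portability argument the abstract makes.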