Journal articles on the topic 'Memory block'

Consult the top 50 journal articles for your research on the topic 'Memory block.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Chae, Suk-Joo, Ronnie Mativenga, Joon-Young Paik, Muhammad Attique, and Tae-Sun Chung. "DSFTL: An Efficient FTL for Flash Memory Based Storage Systems." Electronics 9, no. 1 (January 12, 2020): 145. http://dx.doi.org/10.3390/electronics9010145.

Full text
Abstract:
Flash memory is widely used in solid-state drives (SSDs), smartphones, and similar devices because of its non-volatility, low power consumption, fast access speed, and shock resistance. Because the hardware characteristics of flash memory differ from those of hard disk drives (HDDs), a software layer called the flash translation layer (FTL) was introduced; its function is to make a flash memory device appear as a block device to its host. However, because flash memory must be erased before it is rewritten, flash blocks need to be continually reclaimed through garbage collection (GC) of invalid pages, which incurs costly overhead. Previous hybrid mapping schemes suffer from three sources of GC overhead. First, a partial merge causes more page copies than a switch merge, yet many authors concentrate only on reducing full merges. Second, the association between a data block and a log block lowers the space utilization of the log block and also triggers very costly full merges. Third, the space utilization of data blocks is low because data blocks that still hold many free pages are merged. We therefore propose a new FTL named DSFTL (Dynamic Setting FTL). DSFTL uses many SW (sequential write) log blocks to increase switch merges and decrease partial merges. In addition, it manages data blocks and log blocks dynamically to reduce erase operations and costly full merges, and it prevents data blocks with many free pages from being merged, which increases their space utilization. Our extensive experimental results show that DSFTL reduces the erase count and increases the number of switch merges, thereby decreasing garbage collection overhead.
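The merge taxonomy the abstract relies on (switch, partial, and full merges) can be illustrated with a toy log-block model. This is a hypothetical simplification for illustration only; DSFTL's real bookkeeping is far more elaborate, and the page counts are idealized.

```python
# Toy model of merge costs in a hybrid-mapping FTL (illustrative only).
# A data block and its log block each hold PAGES pages.
PAGES = 4

def merge_cost(log_writes):
    """Classify the merge needed for a log block whose recorded writes
    are the logical page offsets in `log_writes`, in write order."""
    if log_writes == list(range(PAGES)):
        # Switch merge: the log block is a complete in-order copy, so it
        # simply becomes the new data block; one erase, no page copies.
        return ("switch", 0)
    if log_writes == list(range(len(log_writes))):
        # Partial merge: sequential prefix; copy the remaining valid
        # pages from the data block into the log block, then erase.
        return ("partial", PAGES - len(log_writes))
    # Full merge: out-of-order writes; collect every valid page of both
    # blocks into a fresh block, then erase both.
    return ("full", PAGES)

print(merge_cost([0, 1, 2, 3]))  # ('switch', 0)
print(merge_cost([0, 1]))        # ('partial', 2)
print(merge_cost([2, 0, 1]))     # ('full', 4)
```

The model makes the abstract's first point concrete: a switch merge copies nothing, so steering writes so that log blocks fill sequentially converts expensive merges into cheap ones.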
APA, Harvard, Vancouver, ISO, and other styles
2

Prihozhy, A. A., and O. N. Karasik. "HETEROGENIOUS BLOCKED ALL-PAIRS SHORTEST PATHS ALGORITHM." «System analysis and applied information science», no. 3 (November 2, 2017): 68–75. http://dx.doi.org/10.21122/2309-4923-2017-3-68-75.

Full text
Abstract:
The problem of finding the shortest paths between all pairs of vertices in a weighted directed graph is considered. Known solutions include the algorithms of Dijkstra and Floyd-Warshall as well as homogeneous blocked and parallel algorithms. A new heterogeneous blocked algorithm is proposed that distinguishes several types of blocks and accounts for the shared hierarchical memory organization and multi-core processors when calculating each block type. The proposed heterogeneous blocked algorithms are compared with the generally accepted homogeneous universal blocked algorithm at both the theoretical and experimental levels. The main emphasis is on exploiting the nature of the heterogeneity, the interaction of blocks during computation, and the variation in block size, block-matrix size, and total number of blocks, in order to reduce the amount of computation performed per block, reduce the activity of the processor's cache memory, and determine the influence of each block type's calculation time on the total execution time of the heterogeneous blocked algorithm. A recurrent resynchronized algorithm for calculating the diagonal block (D0) is proposed; it improves the use of the processor's cache and reduces the number of iterations needed to calculate the diagonal block by up to a factor of 3, accelerating that calculation by up to 60%. For more efficient use of the cache memory, permutations of the basic k-i-j loops are proposed for the algorithms calculating the cross blocks (C1 and C2) and the updated blocks (U3). Combined with the proposed diagonal-block algorithm, these permutations reduce the total runtime of the heterogeneous blocked algorithm by 13% on average relative to the homogeneous blocked algorithm.
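The block types named in the abstract (diagonal D0, cross C1/C2, updated U3) come from the standard blocked Floyd-Warshall scheme, which can be sketched as follows. This is a plain Python sketch of the homogeneous blocked algorithm, not the authors' heterogeneous implementation.

```python
# Homogeneous blocked Floyd-Warshall: for each primary tile t, relax the
# diagonal block first, then the cross blocks in its row and column, then
# all remaining ("updated") blocks.
import math

def blocked_apsp(dist, b):
    """In-place blocked all-pairs shortest paths on an n x n matrix of
    edge weights (use a large value for missing edges); b is the tile size."""
    n = len(dist)
    nb = math.ceil(n / b)

    def rng(t):
        return range(t * b, min((t + 1) * b, n))

    def relax(ks, rows, cols):
        # Relax dist[i][j] through every intermediate vertex k in `ks`.
        for k in ks:
            row_k = dist[k]
            for i in rows:
                dik = dist[i][k]
                row_i = dist[i]
                for j in cols:
                    if dik + row_k[j] < row_i[j]:
                        row_i[j] = dik + row_k[j]

    for t in range(nb):
        relax(rng(t), rng(t), rng(t))               # D0: diagonal block
        for s in range(nb):
            if s != t:
                relax(rng(t), rng(t), rng(s))       # C1: cross block, same row
                relax(rng(t), rng(s), rng(t))       # C2: cross block, same column
        for u in range(nb):
            for v in range(nb):
                if u != t and v != t:
                    relax(rng(t), rng(u), rng(v))   # U3: updated blocks
    return dist

d = [[0, 3, 99, 7],
     [8, 0, 2, 99],
     [5, 99, 0, 1],
     [2, 99, 99, 0]]
print(blocked_apsp(d, 2))  # [[0, 3, 5, 6], [5, 0, 2, 3], [3, 6, 0, 1], [2, 5, 7, 0]]
```

The blocked schedule computes the same closure as the classic triple loop; its point is that each `relax` call touches only a few tiles, which is what makes the cache-oriented loop permutations studied in the paper possible.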
3

Lee, Myungsub. "A Block Classification Method with Monitor and Restriction in NAND Flash memory." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 209–15. http://dx.doi.org/10.17762/turcomat.v12i5.877.

Full text
Abstract:
In this paper, we propose a block classification with monitor and restriction (BCMR) method to isolate blocks and reduce their interference in garbage collection and wear leveling. The proposed method monitors the endurance variation of blocks during garbage collection and detects hot blocks by applying a restriction condition based on this information. The method classifies blocks by update frequency for garbage collection and wear leveling, prolonging the lifespan of NAND flash memory systems. The performance evaluation results show that the BCMR method prolonged the life of NAND flash memory systems by 3.95% and reduced the standard deviation per block by 7.4%, on average.
4

Perminov, N. S., D. Yu. Tarankova, and S. A. Moiseev. "Spectrally Improved Quantum Memory Based on a Controlled Frequency Comb" [in Russian]. Zhurnal tekhnicheskoi fiziki 127, no. 8 (2019): 313. http://dx.doi.org/10.21883/os.2019.08.48048.202-18.

Full text
Abstract:
We propose a scheme for a universal broadband quantum memory block consisting of three ring microresonators that form a controllable frequency comb and interact with each other and with a common waveguide. We find the optimal parameters of the microresonators, showing that light fields can be stored on this memory block with high efficiency, and we demonstrate a procedure for gluing several memory blocks together to increase the spectral range of the composite quantum memory while maintaining high efficiency.
5

Chang, Meng-Fan, Mary Jane Irwin, and Robert Michael Owens. "Power-Area Trade-Offs in Divided Word Line Memory Arrays." Journal of Circuits, Systems and Computers 07, no. 01 (February 1997): 49–67. http://dx.doi.org/10.1142/s021812669700005x.

Full text
Abstract:
Since on-chip caches account for a significant portion of the power budget of modern microprocessors, low power caches are needed in microprocessors destined for portable electronic applications. A significant portion of the power consumption of caches comes from accessing the cache memory array, and most of the power consumption of the memory array comes from driving the bit line pairs (i.e., the column current). Various memory array architectures have been proposed to improve the word line delay and the column current. For example, in a divided word line memory array, memory cells in each row are organized into blocks. Only the memory cells in the activated block have their bit line pairs driven, thus both improving the speed (by decreasing the word line delay) and lowering the power consumption (by decreasing the column current). In this paper we analyze the power-area tradeoffs of divided word line memories with different block sizes. We compare the area and power consumption of 16 Kbit and 64 Kbit memory arrays with 2, 4, 8, and 16 memory cells per block. Our experiments show that a divided word line memory array can lower the power consumption by 50% to 90% over a nondivided word line memory array. However, it consumes more area; the area of a divided word line memory array can be 15% to 27% larger than that of a comparable nondivided word line array. Our experiments also show that divided word line memory arrays with two or four memory cells per block have a better power-area product than those with more than four cells per block.
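The power-saving mechanism described above admits a first-order illustration: since only the activated block's cells drive their bit lines, the fraction of column current drawn per access scales with the block size. This is a toy model under that single assumption; it ignores the decoder overhead responsible for the area penalty the paper quantifies.

```python
def column_current_fraction(row_width, block_size):
    """Fraction of a row's bit line pairs driven per access in a divided
    word line array: only the activated block's cells drive their bit
    lines, so the column current (the dominant share of array power)
    scales with block_size / row_width."""
    assert row_width % block_size == 0
    return block_size / row_width

# Relative column current for a 64-cell row at the block sizes studied:
for b in (2, 4, 8, 16):
    print(b, column_current_fraction(64, b))
```

Smaller blocks drive fewer bit lines and so save more power, which is consistent with the paper's finding that 2- or 4-cell blocks give the best power-area product.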
6

SEO, EUISEONG, SEUNGRYOUL MAENG, DONGHYOUK LIM, and JOONWON LEE. "EXPLOITING TEMPORAL LOCALITY FOR ENERGY EFFICIENT MEMORY MANAGEMENT." Journal of Circuits, Systems and Computers 17, no. 05 (October 2008): 929–41. http://dx.doi.org/10.1142/s021812660800468x.

Full text
Abstract:
Memory is becoming one of the major power consumers in computing systems, so energy-efficient memory management is essential. Modern memory systems provide sleep states for energy saving. To exploit this feature, existing research has concentrated on increasing spatial locality to deactivate as many blocks as possible. However, it did not account for the unexpected activation of memory blocks caused by cache evictions from deactivated tasks. In this paper, we suggest a software-based power state management scheme for memory, SW-NAP, which exploits temporal locality to reduce the energy lost to such unexpected activations. SW-NAP keeps a memory block deactivated during any tick in which the block incurs no cache miss. The evaluation shows that SW-NAP performs 50% better than PAVM, an existing software scheme, and 20% worse than PMU, an approach based on specialized hardware. We also suggest task scheduling policies that increase the effectiveness of SW-NAP and save up to 7% additional energy.
7

Jaja, J. F., and Kwan Woo Ryu. "The block distributed memory model." IEEE Transactions on Parallel and Distributed Systems 7, no. 8 (1996): 830–40. http://dx.doi.org/10.1109/71.532114.

Full text
8

Yang, Yin, Wen Yi Li, and Kai Wang. "A Read-Write Optimization Scheme for Flash Memory Storage Systems." Applied Mechanics and Materials 687-691 (November 2014): 2096–99. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.2096.

Full text
Abstract:
In this paper, we propose a novel and efficient read-write optimization scheme for flash memory storage systems, named RWF (Read-Write FTL). The proposed scheme links the logical sector number, logical block number, logical page number, physical page number, and physical block number. By uniting log blocks and physical blocks, RWF lets all blocks service update requests. Invalid blocks can be reclaimed properly and intensively, which avoids merging log blocks with physical blocks. Finally, simulation tests of RWF and comparisons with other schemes demonstrate that RWF effectively solves data storage problems, greatly reduces the erase count of flash devices, and improves the performance of flash memory storage systems.
9

Edwards, Nicholas Jain, David Tonny Brain, Stephen Carinna Joly, and Mariana Karry Masucato. "Hadoop distributed file system mechanism for processing of large datasets across computers cluster using programming techniques." International research journal of management, IT and social sciences 6, no. 6 (September 7, 2019): 1–16. http://dx.doi.org/10.21744/irjmis.v6n6.739.

Full text
Abstract:
In this paper, we show that HDFS I/O performance improves by integrating set associativity into the cache design and by changing the pipeline topology to a fully connected digraph. For read operations, a set-associative cache offers far more candidate locations (words) than direct mapping, so the miss ratio is very low, reducing the swapping of data between main memory and cache memory and increasing memory I/O performance. For write operations, instead of the sequential pipeline, we construct a fully connected graph over the data blocks listed in the NameNode metadata. In the sequential pipeline, the data is copied to the source node of the pipeline, which copies it to the next data block, and so on until the last data block; the acknowledgment then follows the same path back from the last block to the source block. The time required to transfer the data to all data blocks in the pipeline and to complete the acknowledgment is therefore almost 2n times the copy time between two data blocks (for a replication factor of n).
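The roughly-2n estimate for the sequential pipeline can be sketched numerically. This is an idealized model with hypothetical single-hop copy and acknowledgment times; it ignores bandwidth contention at the source in the fully connected case.

```python
def sequential_pipeline_time(n, t_copy, t_ack):
    """HDFS-style sequential replication pipeline with replication factor n:
    the block hops node-to-node n times and the ack then retraces the chain,
    so the total is about n copies + n acks, i.e. roughly 2n single-hop times."""
    return n * t_copy + n * t_ack

def fully_connected_time(t_copy, t_ack):
    """Idealized fully connected digraph: the source reaches every replica
    in parallel, so one copy plus one ack (bandwidth permitting)."""
    return t_copy + t_ack

print(sequential_pipeline_time(3, 2.0, 1.0))  # 9.0
print(fully_connected_time(2.0, 1.0))         # 3.0
```

With the usual replication factor of 3, the idealized parallel topology completes in one hop's copy-plus-ack time instead of three of each, which is the gap the paper's topology change targets.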
10

Chung, Tae-Sun, Dong-Joo Park, and Jongik Kim. "An Efficient Flash Translation Layer for Large Block NAND Flash Devices." Journal of Circuits, Systems and Computers 24, no. 09 (August 27, 2015): 1550138. http://dx.doi.org/10.1142/s0218126615501388.

Full text
Abstract:
Recently, flash memory has been widely used as non-volatile storage for embedded applications such as smartphones, MP3 players, and digital cameras. The software layer called the flash translation layer (FTL) has become more important, since it is a key factor in overall flash memory system performance. Many researchers have proposed FTL algorithms for small block flash memory, in which the size of a physical page of flash memory is equivalent to the size of a data sector of the file system. However, major flash vendors now produce large block flash memory, in which the size of a physical page is larger than the file system's data sector size. Since large block flash memory has new features, designing FTL algorithms specialized to it is a challenging issue. In this paper, we provide an efficient FTL named LSTAFF* for large block flash memory. LSTAFF* is designed to achieve better performance by using the characteristics of large block flash memory and to provide safety by abiding by its restrictions. Experimental results show that LSTAFF* outperforms existing algorithms on large block flash memory.
11

Maltsev, Oleg. "ABOUT INTUITION MECHANISMS IN THE CONTEXT OF HUMAN ACTIVITY." Educational Discourse: collection of scientific papers, no. 22(4) (May 14, 2020): 79–98. http://dx.doi.org/10.33930/ed.2019.5007.22(4)-7.

Full text
Abstract:
Human activity is the object of the conducted research, and the mechanisms of intuition are its subject. The purpose of this article is therefore to state a philosophical comprehension of how the mechanisms of intuition function in human activity. The author's main ideas underwent approbation in scientific and field research in 2015-2020 and are systematically stated and presented here for the first time. The novelty of the article lies in the systematization of knowledge about the mechanisms of intuition, resulting in a philosophical comprehension of the principles of operation of human memory blocks. The research examines the dialectical contradiction between the rational and the irrational as a source of the development of activity, with concrete manifestations of this contradiction in the connection between intuition and goal-setting, uniting consciousness and memory. Hence the need to consider the main mechanisms of memory, namely the prototypology memory block, the archetypology memory block, and the ancestral unconscious block, whose integrity is defined by the memory model.
12

Yang, Yin, Wen Yi Li, and Kai Wang. "A New FTL-Based Flash Memory Management Scheme for Flash-Based Storage Systems." Applied Mechanics and Materials 651-653 (September 2014): 1000–1003. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.1000.

Full text
Abstract:
In this paper, we propose a novel and efficient flash translation layer scheme called BLTF (Block Link-Table FTL). In the proposed scheme, all blocks can service update requests, so updates can be performed on any physical block; by uniting log blocks and physical blocks, BLTF avoids uneven erasing and low block utilization. Invalid blocks can be reclaimed properly and intensively, which avoids merging log blocks with physical blocks. Finally, simulation tests demonstrate that BLTF effectively solves data storage problems, and comparison with other algorithms shows that it greatly prolongs the service life of flash devices and improves the efficiency of block erase operations.
13

Kramer, Gerhard. "Information Networks With In-Block Memory." IEEE Transactions on Information Theory 60, no. 4 (April 2014): 2105–20. http://dx.doi.org/10.1109/tit.2014.2303120.

Full text
14

Shiba, Kazuyoshi, and Katsuhiko Kubota. "Block-erasing methods for flash memory." Electronics and Communications in Japan (Part II: Electronics) 77, no. 4 (April 1994): 106–13. http://dx.doi.org/10.1002/ecjb.4420770412.

Full text
15

Danon, Asaf, and Mohammed Shurrab. "Alternating trifascicular block and cardiac memory." Journal of Electrocardiology 50, no. 6 (November 2017): 966–68. http://dx.doi.org/10.1016/j.jelectrocard.2017.07.006.

Full text
16

Seshagiri Rao, V. R., and M. Asha Rani. "Global Spare Blocks for Repair of Clustered Fault Cells in Embedded Memories." Journal of Computational and Theoretical Nanoscience 17, no. 4 (April 1, 2020): 1969–75. http://dx.doi.org/10.1166/jctn.2020.8475.

Full text
Abstract:
Conventional memories use spare full rows and columns to repair bad cells, but this is inefficient, as many chips remain unusable. Alternatively, the spare redundant rows and columns can be divided into blocks so that repair can be attempted at block level. A novel global feature is added to the spare blocks that enables a spare row (or column) block to be used anywhere in the memory array. This memory hardware architecture interfaces easily with embedded memory cores. A new algorithm, Essential Most Spare Pivoting (EMSP), is proposed that can easily be implemented in a built-in configuration. The area overhead proposed in this paper is very small. Simulation results indicate that chip yield, reliability, and repair rate are likely to improve significantly.
17

Grechanyy, Sergey, and K. Chubur. "METHODS FOR ENSURING RESISTANCE TO HCP FOR CONTROL LOGIC AND STATIC MEMORY OF THE MICROPROCESSOR IN THE DESIGN." Modeling of systems and processes 12, no. 4 (January 23, 2020): 17–24. http://dx.doi.org/10.12737/2219-0767-2020-12-4-17-24.

Full text
Abstract:
The article describes methods for making the RAM block of a microprocessor resistant to heavy charged particles (HCP). An implementation and a block diagram of static memory based on dummy blocks are given. The paper considers methods of combating the bipolar effect that aim to control the potential of the transistor body and reduce its resistance. The dependence of the critical charge of an SOI memory cell on the gain of a parasitic bipolar transistor is modeled. To increase the fault tolerance of combinational circuits consisting of control logic and decoder blocks, redundancy is applied at the level of individual gates.
18

SZKUTNIK, JACEK, and KRZYSZTOF KUŁAKOWSKI. "GENERALIZED SYNCHRONIZATION AND MEMORY EFFECT IN THE BURRIDGE–KNOPOFF SYSTEM OF THREE BLOCKS." International Journal of Modern Physics C 15, no. 05 (June 2004): 629–36. http://dx.doi.org/10.1142/s0129183104006091.

Full text
Abstract:
Recently, synchronization in the Burridge–Knopoff model has been found to depend on the initial conditions. Here we report the existence of three modes of oscillation in a system of three blocks. In one mode, the two lateral blocks are synchronized. In the second mode, the central block moves with almost constant velocity, i.e., it does not stick, while the two lateral blocks do stick and move in opposite phases. In the third mode, the blocks oscillate with aperiodic amplitude; the lateral blocks move in opposite phases, and their frequency is lower than that of the central block. The mode selected by the system depends on the initial conditions. Numerical results indicate that no other modes exist in the phase space.
19

van Renen, Alexander, Lukas Vogel, Viktor Leis, Thomas Neumann, and Alfons Kemper. "Building blocks for persistent memory." VLDB Journal 29, no. 6 (September 23, 2020): 1223–41. http://dx.doi.org/10.1007/s00778-020-00622-9.

Full text
Abstract:
I/O latency and throughput are two of the major performance bottlenecks for disk-based database systems. Persistent memory (PMem) technologies, like Intel’s Optane DC persistent memory modules, promise to bridge the gap between NAND-based flash (SSD) and DRAM, and thus eliminate the I/O bottleneck. In this paper, we provide the first comprehensive performance evaluation of PMem on real hardware in terms of bandwidth and latency. Based on the results, we develop guidelines for efficient PMem usage and four optimized low-level building blocks for PMem applications: log writing, block flushing, in-place updates, and coroutines for write latency hiding.
20

Langr, Daniel, and Ivan Šimeček. "Analysis of Memory Footprints of Sparse Matrices Partitioned Into Uniformly-Sized Blocks." Scalable Computing: Practice and Experience 19, no. 3 (September 14, 2018): 275–92. http://dx.doi.org/10.12694/scpe.v19i3.1358.

Full text
Abstract:
The presented study analyses memory footprints of 563 representative benchmark sparse matrices with respect to their partitioning into uniformly-sized blocks. Different block sizes and different ways of storing blocks in memory are considered and statistically evaluated. Memory footprints of partitioned matrices are then compared with their lower bounds and CSR, index-compressed CSR, and EBF storage formats. The results show that blocking-based storage formats may significantly reduce memory footprints of sparse matrices arising from a wide range of application domains. Additionally, measured consistency of results is presented and discussed, benefits of individual formats for storing blocks are evaluated, and an analysis of best-case and worst-case matrices is provided for in-depth understanding of causes of memory savings of blocking-based formats.
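A back-of-the-envelope comparison shows how blocking can shrink footprints when nonzeros cluster into dense blocks. This sketch uses a hypothetical dense-block format with assumed 4-byte indices and 8-byte values; the paper evaluates several real formats and lower bounds, not this one.

```python
def csr_bytes(nrows, nnz, idx_bytes=4, val_bytes=8):
    """Footprint of CSR: one value + one column index per nonzero,
    plus a row-pointer array of nrows + 1 entries."""
    return nnz * (val_bytes + idx_bytes) + (nrows + 1) * idx_bytes

def dense_block_bytes(block_coords, b, idx_bytes=4, val_bytes=8):
    """Footprint of a hypothetical blocked format: the matrix is cut into
    uniformly sized b x b blocks, every nonzero block is stored densely
    (zero-padded), and each block carries one (row, col) coordinate pair."""
    return len(set(block_coords)) * (b * b * val_bytes + 2 * idx_bytes)

# 1000 x 1000 matrix, 5000 nonzeros clustered into 100 dense 8 x 8 blocks:
print(csr_bytes(1000, 5000))                               # 64004
print(dense_block_bytes({(i, i) for i in range(100)}, 8))  # 52000
```

The blocked variant wins here because per-nonzero index overhead is replaced by per-block overhead; with scattered nonzeros the zero padding inside blocks would reverse the comparison, which is why the study analyses best-case and worst-case matrices.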
21

Wang, Guo Hua, and Jing Lin Sun. "BIST-Based Method for Diagnosing Multiple Faulty CLBs in FPGAs." Applied Mechanics and Materials 643 (September 2014): 243–48. http://dx.doi.org/10.4028/www.scientific.net/amm.643.243.

Full text
Abstract:
This paper presents a new built-in self-test (BIST) method for fault detection and fault diagnosis of configurable logic blocks (CLBs) in FPGAs. The proposed BIST adopts a circular comparison structure to overcome fault masking when diagnosing multiple faulty CLBs and to improve diagnostic resolution. To test the memory block in every CLB, different TPG structures are proposed to obtain maximum stuck-at fault coverage: for the LUT mode of the memory block, a TPG based on an LFSR provides pseudo-exhaustive test patterns, and for the distributed RAM mode, a TPG based on an FSM provides March C- test patterns. In addition, a comparator-based output response analyzer (ORA) and a cascaded ORA scan chain are used to locate the faulty CLB and propagate the comparison output in every row. Finally, fault-injection experiments verify the method's ability to detect and diagnose multiple faulty CLBs in faulty FPGAs.
22

Rajsuman, Rochit, and Kamal Rajkanan. "STD Architecture: A Practical Approach to Test M-Bits Random Access Memories." VLSI Design 1, no. 4 (January 1, 1994): 327–34. http://dx.doi.org/10.1155/1994/36218.

Full text
Abstract:
We present a design method (called the STD architecture) for designing large memories so that test time does not increase with memory size. Large memories can be constructed from several small memory blocks. The memory address decoder is divided into two or more levels and designed such that during test mode all small memory blocks are accessed together. With the help of the modified decoder, all small memory blocks are tested in parallel using any standard test algorithm, so the time to test the whole memory equals the time required to test one small block. The proposed design is highly structured, and the hardware overhead is negligible. The basic idea is to exploit internal hardware for testing purposes. With the proposed method, a constant test time can be achieved irrespective of memory size. The STD architecture is applicable to memory chips as well as memory boards, and the design is suitable for fault detection as well as fault diagnosis.
23

Chung, Weon-Il, and Liangbo Li. "Memory Compaction Scheme with Block-Level Buffer for Large Flash Memory." International Journal of Contents 6, no. 4 (December 28, 2010): 22–29. http://dx.doi.org/10.5392/ijoc.2010.6.4.022.

Full text
24

Mao-Chao Lin, Jia-Yin Wang, and Shang-Chih Ma. "On block-coded modulation with interblock memory." IEEE Transactions on Communications 45, no. 11 (1997): 1401–11. http://dx.doi.org/10.1109/26.649757.

Full text
25

Lee, Jung-Hoon. "Index block mapping for flash memory system." Journal of the Korea Society of Computer and Information 15, no. 8 (August 31, 2010): 23–30. http://dx.doi.org/10.9708/jksci.2010.15.8.023.

Full text
26

Koh, Kwangwon, Kangho Kim, Seunghyub Jeon, and Jaehyuk Huh. "Disaggregated Cloud Memory with Elastic Block Management." IEEE Transactions on Computers 68, no. 1 (January 1, 2019): 39–52. http://dx.doi.org/10.1109/tc.2018.2851565.

Full text
27

Ito, E., M. Yamagishi, D. Hatakeyama, T. Watanabe, Y. Fujito, V. Dyakonova, and K. Lukowiak. "Memory block: a consequence of conflict resolution." Journal of Experimental Biology 218, no. 11 (April 16, 2015): 1699–704. http://dx.doi.org/10.1242/jeb.120329.

Full text
28

You-Sung Chang and Chong-Min Kyung. "Conforming block inversion for low power memory." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 10, no. 1 (February 2002): 15–19. http://dx.doi.org/10.1109/92.988726.

Full text
29

Lopriore, L. "Stack Cache Memory for Block-Structured Programs." Computer Journal 37, no. 7 (July 1, 1994): 610–20. http://dx.doi.org/10.1093/comjnl/37.7.610.

Full text
30

Porzelius, James. "Memory for Pain After Nerve-Block Injections." Clinical Journal of Pain 11, no. 2 (June 1995): 112–20. http://dx.doi.org/10.1097/00002508-199506000-00005.

Full text
31

Cosme, Iria C. S., Isaac F. Fernandes, João L. de Carvalho, and Samuel Xavier-de-Souza. "Memory-usage advantageous block recursive matrix inverse." Applied Mathematics and Computation 328 (July 2018): 125–36. http://dx.doi.org/10.1016/j.amc.2018.01.051.

Full text
32

Yamazaki, Ichitaro, Akihiro Ida, Rio Yokota, and Jack Dongarra. "Distributed-memory lattice H-matrix factorization." International Journal of High Performance Computing Applications 33, no. 5 (August 2019): 1046–63. http://dx.doi.org/10.1177/1094342019861139.

Full text
Abstract:
We parallelize the LU factorization of a hierarchical low-rank matrix (H-matrix) on a distributed-memory computer. This is much more difficult than the H-matrix-vector multiplication due to the dataflow of the factorization, and it is much harder than the parallelization of a dense matrix factorization due to the irregular hierarchical block structure of the matrix. Block low-rank (BLR) format gets rid of the hierarchy and simplifies the parallelization, often increasing concurrency. However, this comes at a price of losing the near-linear complexity of the H-matrix factorization. In this work, we propose to factorize the matrix using a “lattice H-matrix” format that generalizes the BLR format by storing each of the blocks (both diagonals and off-diagonals) in the H-matrix format. These blocks stored in the H-matrix format are referred to as lattices. Thus, this lattice format aims to combine the parallel scalability of BLR factorization with the near-linear complexity of H-matrix factorization. We first compare factorization performances using the H-matrix, BLR, and lattice H-matrix formats under various conditions on a shared-memory computer. Our performance results show that the lattice format has storage and computational complexities similar to those of the H-matrix format, and hence a much lower cost of factorization than BLR. We then compare the BLR and lattice H-matrix factorization on distributed-memory computers. Our performance results demonstrate that compared with BLR, the lattice format with the lower cost of factorization may lead to faster factorization on the distributed-memory computer.
33

Kawarazaki, Noriyuki, Nobuto Kashiwagi, Ichiro Hoya, and Kazue Nishihara. "Manipulator Work System Using Gesture Instructions." Journal of Robotics and Mechatronics 14, no. 5 (October 20, 2002): 506–13. http://dx.doi.org/10.20965/jrm.2002.p0506.

Full text
Abstract:
This paper provides a cooperative manipulator work system using gesture instructions. In our system, hand gestures are recognized and the manipulator works based on them. We propose block division to detect and recognize hand gestures rapidly. The template of the hand is divided into 3 blocks: base, hand and finger. Template memory is reduced by block division. The effectiveness of our system is clarified by several experimental results.
34

Freudenberger, Jürgen, Mohammed Rajab, Daniel Rohweder, and Malek Safieh. "A Codec Architecture for the Compression of Short Data Blocks." Journal of Circuits, Systems and Computers 27, no. 02 (September 11, 2017): 1850019. http://dx.doi.org/10.1142/s0218126618500196.

Full text
Abstract:
This work proposes a lossless data compression algorithm for short data blocks. The proposed compression scheme combines a modified move-to-front algorithm with Huffman coding. This algorithm is applicable in storage systems where the data compression is performed on block level with short block sizes, in particular, in non-volatile memories. For block sizes in the range of 1 kB, it provides a compression gain comparable to the Lempel–Ziv–Welch algorithm. Moreover, encoder and decoder architectures are proposed that have low memory requirements and provide fast data encoding and decoding.
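The move-to-front stage of such a scheme can be sketched as follows. This is the standard MTF transform over byte values; the paper's codec uses a modified variant combined with Huffman coding.

```python
def mtf_encode(data):
    """Move-to-front transform: frequently repeated symbols map to small
    indices, which a subsequent Huffman coder can represent with short
    codewords."""
    table = list(range(256))
    out = []
    for byte in data:
        idx = table.index(byte)
        out.append(idx)
        table.pop(idx)
        table.insert(0, byte)  # recently seen symbol moves to the front
    return out

def mtf_decode(indices):
    """Inverse transform: replay the same table updates."""
    table = list(range(256))
    out = bytearray()
    for idx in indices:
        byte = table.pop(idx)
        out.append(byte)
        table.insert(0, byte)
    return bytes(out)

codes = mtf_encode(b"aaabbbaaa")
print(codes)  # [97, 0, 0, 98, 0, 0, 1, 0, 0]
assert mtf_decode(codes) == b"aaabbbaaa"
```

The runs of zeros in the output illustrate why MTF helps on short blocks: it skews the symbol distribution so that Huffman coding pays off even without a large dictionary, unlike LZW-style schemes that need longer inputs to build one.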
35

Rugg, Michael D., Kevin Allan, and Claire S. Birch. "Electrophysiological Evidence for the Modulation of Retrieval Orientation by Depth of Study Processing." Journal of Cognitive Neuroscience 12, no. 4 (July 2000): 664–78. http://dx.doi.org/10.1162/089892900562291.

Abstract:
Event-related potentials (ERPs) were employed to investigate whether brain activity elicited by retrieval cues in a memory test varies according to the encoding task undertaken at study. Two recognition memory test blocks were administered, preceded, in one case, by a “shallow” study task (alphabetic judgement) and, in the other case, by a “deep” task (sentence generation). ERPs elicited by the new words in each test block differed, the ERPs elicited in the block following the shallow study task exhibiting the more positive-going waveforms. This finding was taken as evidence that subjects adopt different “retrieval sets” when attempting to retrieve items that had been encoded in terms of alphabetic versus semantic attributes. Differences between the ERPs elicited by correctly classified old and new words (old/new effects) also varied with encoding task. The effects for deeply studied words resembled those found in previous ERP studies of recognition memory, whereas old/new effects for shallowly studied words were confined to a late-onsetting, right frontal positivity. Together, the findings indicate that the depth of study processing influences two kinds of memory-related neural activity, associated with memory search operations, and the processing of retrieved information, respectively.
36

Weinzierl, Tobias, Michael Bader, Kristof Unterweger, and Roland Wittmann. "Block Fusion on Dynamically Adaptive Spacetree Grids for Shallow Water Waves." Parallel Processing Letters 24, no. 03 (September 2014): 1441006. http://dx.doi.org/10.1142/s0129626414410060.

Abstract:
Spacetrees are a popular formalism to describe dynamically adaptive Cartesian grids. Even though they directly yield a mesh, it is often computationally reasonable to embed regular Cartesian blocks into their leaves. This promotes stencils working on homogeneous data chunks. The choice of a proper block size is sensitive. While large block sizes foster loop parallelism and vectorisation, they restrict the adaptivity's granularity and hence increase the memory footprint and lower the numerical accuracy per byte. In the present paper, we therefore use a multiscale spacetree-block coupling admitting blocks on all spacetree nodes. We propose to find sets of blocks on the finest scale throughout the simulation and to replace them by fused big blocks. Such a replacement strategy can pick up hardware characteristics, i.e. which block size yields the highest throughput, while the dynamic adaptivity of the fine grid mesh is not constrained—applications can work with fine granular blocks. We study the fusion with a state-of-the-art shallow water solver on an Intel Sandy Bridge and a Xeon Phi processor, where we evaluate their response to selected block optimisation and vectorisation.
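As a toy illustration of the fusion step (assuming a plain regular 2D grid of fine blocks rather than the paper's multiscale spacetree coupling), one can scan the finest-scale leaves for complete, aligned 2×2 sibling groups and replace each group with one block of twice the edge length:

```python
def fuse_blocks(leaf_blocks, block_size):
    """Find aligned 2x2 groups of equally sized fine-scale blocks and replace
    each group with one fused block of twice the edge length."""
    fused, rest = [], set(leaf_blocks)       # (x, y) anchors on a regular grid
    for (x, y) in sorted(leaf_blocks):
        sibs = {(x, y), (x + block_size, y),
                (x, y + block_size), (x + block_size, y + block_size)}
        # fuse only if the anchor is aligned to the coarser grid and all
        # four siblings are present as fine-scale leaves
        if x % (2 * block_size) == 0 and y % (2 * block_size) == 0 and sibs <= rest:
            rest -= sibs
            fused.append((x, y))             # fused block of edge 2 * block_size
    return fused, sorted(rest)

leaves = [(0, 0), (8, 0), (0, 8), (8, 8), (16, 0)]
fused, rest = fuse_blocks(leaves, 8)
print(fused, rest)  # the complete 2x2 group at (0, 0) is fused; (16, 0) stays fine
```

The real strategy additionally chooses the fused block size from measured hardware throughput, which this sketch omits.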
37

Fan, Quan Run, and Feng Pan. "Technology Mapping for Heterogeneous FPGA in Different EDA Stages." Applied Mechanics and Materials 229-231 (November 2012): 1866–69. http://dx.doi.org/10.4028/www.scientific.net/amm.229-231.1866.

Abstract:
In the traditional EDA flow, technology mapping is performed after logic synthesis. Besides programmable logic blocks, heterogeneous FPGAs also contain hard blocks, such as memory blocks and multipliers. After logic synthesis, it is difficult for technology mapping to find sub-circuits that can be implemented in hard blocks. In this paper, a systematic technology mapping approach is proposed. In the design phase, with the support of CAD tools, a module-based design approach is used to map some design blocks to large hard blocks. During register-transfer-level synthesis, functions that are suitable for implementation in small hard blocks are identified. The remaining logic functions are mapped into lookup tables of different input sizes.
38

Prasad Arya, Govind, Devendra Prasad, and Sandeep Singh Rana. "An Improved Page Replacement Algorithm Using Block Retrieval of Pages." International Journal of Engineering & Technology 7, no. 4.5 (September 22, 2018): 32. http://dx.doi.org/10.14419/ijet.v7i4.5.20004.

Abstract:
Computer programmers write code of any length without keeping the available primary memory in mind. This is possible thanks to virtual memory: the concept of executing a program of any size even when the primary memory is smaller than the program to be executed. Virtual memory can be implemented using paging. The operating system allocates a number of memory frames to each program while loading it into memory, and the program code is divided into pages of the same size as the frames; pages and frames are kept equal in size for better memory utilization. Because every process is allocated only a limited number of memory frames during execution, page replacements are needed, and researchers have suggested a number of page replacement techniques to this end. In this paper, we propose a modified page replacement technique based on reading blocks of pages from secondary storage. Disc access is very slow compared with access to primary memory; whenever there is a page fault, the required page is retrieved from secondary storage, so numerous page faults increase the execution time of a process. In the proposed methodology, on every page fault a number of pages equal to the allotted memory frames is read instead of a single page at a time. Fetching a block of pages from secondary storage increases the possibility of page hits and, as a result, improves the hit ratio of the processes.
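The claimed effect — fetching a frame-sized block of sequential pages on each fault raises the hit ratio for sequential access — can be checked with a small simulation. This sketch uses plain FIFO replacement and a synthetic reference string, which are illustrative assumptions, not the paper's exact setup:

```python
from collections import deque

def hit_ratio(reference_string, frames, prefetch_block=1):
    """Simulate FIFO page replacement. On a fault, load the faulting page and,
    when prefetch_block > 1, the next sequential pages up to the frame count."""
    memory = deque(maxlen=frames)       # a full deque evicts its oldest entry (FIFO)
    hits = 0
    for page in reference_string:
        if page in memory:
            hits += 1
        else:
            for p in range(page, page + min(prefetch_block, frames)):
                if p not in memory:
                    memory.append(p)    # prefetched pages fill the frames
    return hits / len(reference_string)

# A mostly sequential reference string, typical of straight-line code:
refs = [p for start in (0, 40, 0, 40) for p in range(start, start + 20)]
print(hit_ratio(refs, frames=8, prefetch_block=1))
print(hit_ratio(refs, frames=8, prefetch_block=8))  # block fetch lifts the hit ratio
```

With single-page fetching every sequential reference faults, whereas fetching a block of eight pages turns most of the subsequent references into hits.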
39

Khadka, Shauharda, Jen Jen Chung, and Kagan Tumer. "Neuroevolution of a Modular Memory-Augmented Neural Network for Deep Memory Problems." Evolutionary Computation 27, no. 4 (December 2019): 639–64. http://dx.doi.org/10.1162/evco_a_00239.

Abstract:
We present Modular Memory Units (MMUs), a new class of memory-augmented neural network. MMU builds on the gated neural architectures of Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTM) networks to incorporate an external memory block, similar to a Neural Turing Machine (NTM). MMU interacts with the memory block using independent read and write gates that serve to decouple the memory from the central feedforward operation. This allows for regimented memory access and update, giving our network the ability to choose when to read from memory, update it, or simply ignore it. This capacity to act in detachment allows the network to shield the memory from noise and other distractions, while simultaneously using it to effectively retain and propagate information over an extended period of time. We train MMU using both neuroevolution and gradient descent, and perform experiments on two deep memory benchmarks. Results demonstrate that MMU performs significantly faster and more accurately than traditional LSTM-based methods, and is robust to dramatic increases in the sequence depth of these memory benchmarks.
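A minimal sketch of the gating idea — independent read and write gates mediating access to an external memory block — might look as follows. This is a schematic single-step cell with made-up weight shapes, not the authors' MMU architecture or training setup:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedMemoryBlock:
    """External memory with independent read and write gates, in the spirit of
    the MMU description: the network can read, update, or ignore the memory."""
    def __init__(self, hidden, mem, rng):
        self.M = np.zeros(mem)                               # external memory block
        self.Wr = rng.standard_normal((mem, hidden)) * 0.1   # read-gate weights
        self.Ww = rng.standard_normal((mem, hidden)) * 0.1   # write-gate weights
        self.Wc = rng.standard_normal((mem, hidden)) * 0.1   # candidate content

    def step(self, h):
        read_gate = sigmoid(self.Wr @ h)     # how much of memory to expose
        write_gate = sigmoid(self.Ww @ h)    # how much of memory to overwrite
        candidate = np.tanh(self.Wc @ h)
        # Gated update: entries with a small write gate retain their value, which
        # is what lets the block carry information over long horizons.
        self.M = (1 - write_gate) * self.M + write_gate * candidate
        return read_gate * self.M            # reading is decoupled from writing

rng = np.random.default_rng(1)
cell = GatedMemoryBlock(hidden=4, mem=6, rng=rng)
for t in range(3):
    out = cell.step(rng.standard_normal(4))
print(out.shape)  # (6,)
```

In the actual model these gates sit alongside a central feedforward path and are trained by neuroevolution or gradient descent; here they are only exercised with random inputs to show the dataflow.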
40

Bedient, Richard, Michael Frame, Keith Gross, Jennifer Lanski, and Brendan Sullivan. "Higher Block IFS 1: Memory Reduction and Dimension Computations." Fractals 18, no. 02 (June 2010): 145–55. http://dx.doi.org/10.1142/s0218348x10004804.

Abstract:
By applying a result from the theory of subshifts of finite type [1], we generalize the result of Frame and Lanski [2] to IFS with multistep memory. Specifically, we show that for an IFS [Formula: see text] with m-step memory, there is an IFS with 1-step memory (though in general with many more transformations than [Formula: see text]) having the same attractor as [Formula: see text].
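The underlying recoding from symbolic dynamics — turning an m-step constraint into a 1-step constraint on length-m blocks — can be illustrated on a toy subshift. This sketch handles only the transition structure, not the IFS transformations or the attractor:

```python
from itertools import product

def one_step_recoding(symbols, allowed, m):
    """Recode an m-step constraint (a set of allowed length-(m+1) words) into
    a 1-step constraint on length-m blocks: block u may be followed by block v
    iff they overlap in m-1 symbols and the combined word u + v[-1] is allowed."""
    states = list(product(symbols, repeat=m))        # the length-m blocks
    transitions = {u: [] for u in states}
    for u, v in product(states, repeat=2):
        if u[1:] == v[:-1] and u + (v[-1],) in allowed:
            transitions[u].append(v)
    return states, transitions

# 2-step memory over {0, 1}: forbid three equal symbols in a row.
allowed = {w for w in product((0, 1), repeat=3) if len(set(w)) > 1}
states, trans = one_step_recoding((0, 1), allowed, m=2)
print({u: trans[u] for u in states})
```

Each length-m block becomes a single state, so the recoded system has 1-step memory at the cost of many more states — mirroring the trade-off the abstract notes (many more transformations, same attractor).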
41

Li, Xiao Feng, Peng Fan, Xiao Hua Liu, Xing Chao Wang, Chuan Hu, Chun Xiang Liu, and Shi Guang Bie. "Parallel Rendering Strategies for 3D Emulational Scene of Live Working." Applied Mechanics and Materials 457-458 (October 2013): 1021–27. http://dx.doi.org/10.4028/www.scientific.net/amm.457-458.1021.

Abstract:
Because 3D emulational scenes of live working contain abundant deep scene nodes, existing three-dimensional scene data organization methods and rendering strategies suffer from flaws such as rendering stutter and delayed interactive response. A real-time rendering method for huge amounts of urban data is presented, using techniques such as model identification based on multi-grid block partitioning, a thread pool, caching, and real-time external-memory scheduling algorithms. The whole scene is partitioned into blocks of different sizes, and the blocks are arranged in a multi-grid keyed to model ID and tile ID to accelerate model scheduling. Fast clipping is achieved by fixing the position and direction of the block-based view frustum, and data-downloading tasks are handed off to a thread pool executing in the background, which achieves dynamic data loading in parallel with three-dimensional scene rendering. To relieve the bottleneck at the computer hardware, in-out memory scheduling algorithms eliminate invisible scene models and recycle dirty data in memory. Experimental results showed that the method is very efficient and suitable for applications in massive urban model rendering and interactive walkthrough.
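The loading pipeline described — hand disk reads to a background thread pool, keep resident blocks in a cache, and evict what falls out of use — can be sketched as follows. The LRU policy, capacity, and fake loader here are illustrative assumptions, not the paper's scheduling algorithms:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import OrderedDict
import threading
import time

class BlockCache:
    """Background loader for scene blocks: requests go to a thread pool so
    rendering is never blocked, and an LRU cache bounds memory use by evicting
    blocks that fall out of view (the 'dirty data')."""
    def __init__(self, loader, capacity=4, workers=2):
        self.loader = loader
        self.capacity = capacity
        self.cache = OrderedDict()            # block_id -> data, in LRU order
        self.pending = {}                     # block_id -> in-flight Future
        self.lock = threading.RLock()         # reentrant: callbacks may fire inline
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def request(self, block_id):
        """Non-blocking: return the block if resident, else schedule a load."""
        with self.lock:
            if block_id in self.cache:
                self.cache.move_to_end(block_id)      # mark as recently used
                return self.cache[block_id]
            if block_id not in self.pending:
                fut = self.pool.submit(self.loader, block_id)
                self.pending[block_id] = fut
                fut.add_done_callback(
                    lambda f, b=block_id: self._store(b, f.result()))
        return None                                   # render a placeholder this frame

    def _store(self, block_id, data):
        with self.lock:
            self.pending.pop(block_id, None)
            self.cache[block_id] = data
            while len(self.cache) > self.capacity:    # evict least recently used
                self.cache.popitem(last=False)

def fake_disk_load(block_id):
    time.sleep(0.01)                                  # stand-in for disk latency
    return f"mesh-{block_id}"

cache = BlockCache(fake_disk_load, capacity=4)
assert cache.request(7) is None          # first frame: block not resident yet
time.sleep(0.1)                          # let the background load finish
print(cache.request(7))                  # a later frame finds the block resident
```

The renderer keeps drawing with placeholders while loads complete in the background, which is the parallelism between data loading and scene rendering the abstract describes.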
42

Hur, Jae Young. "Block Level TLB Coalescing for Buddy Memory Allocator." IEICE Transactions on Information and Systems E102.D, no. 10 (October 1, 2019): 2043–46. http://dx.doi.org/10.1587/transinf.2019edl8089.

43

Kim, Sik, Sun-Young Hwang, and Moon Jun Kang. "A Memory-Efficient Block-wise MAP Decoder Architecture." ETRI Journal 26, no. 6 (December 9, 2004): 615–21. http://dx.doi.org/10.4218/etrij.04.0103.0091.

44

Park, Neungsoo, Bo Hong, and V. K. Prasanna. "Tiling, block data layout, and memory hierarchy performance." IEEE Transactions on Parallel and Distributed Systems 14, no. 7 (July 2003): 640–54. http://dx.doi.org/10.1109/tpds.2003.1214317.

45

Stark, W. E., and R. J. McEliece. "On the capacity of channels with block memory." IEEE Transactions on Information Theory 34, no. 2 (March 1988): 322–24. http://dx.doi.org/10.1109/18.2642.

46

Neelamegam, Ramesh, Emily L. Ricq, Melissa Malvaez, Debasis Patnaik, Stephanie Norton, Stephen M. Carlin, Ian T. Hill, Marcelo A. Wood, Stephen J. Haggarty, and Jacob M. Hooker. "Brain-Penetrant LSD1 Inhibitors Can Block Memory Consolidation." ACS Chemical Neuroscience 3, no. 2 (December 14, 2011): 120–28. http://dx.doi.org/10.1021/cn200104y.

47

Butka, Argjir, and Llukan Puka. "A Block Bootstrap Procedure for Long Memory Processes." International Journal of Mathematics Trends and Technology 14, no. 2 (October 25, 2014): 72–78. http://dx.doi.org/10.14445/22315373/ijmtt-v14p511.

48

Cha, Dong Il, Hak Yong Kim, Keun Hyung Lee, Yong Chae Jung, Jae Whan Cho, and Byung Chul Chun. "Electrospun nonwovens of shape-memory polyurethane block copolymers." Journal of Applied Polymer Science 96, no. 2 (2005): 460–65. http://dx.doi.org/10.1002/app.21467.

49

Takakubo, Hajime, Cong-Kha Pham, and Katsufusa Shono. "A bitmap memory bank which allows block accesses." Electronics and Communications in Japan (Part II: Electronics) 74, no. 8 (1991): 88–98. http://dx.doi.org/10.1002/ecjb.4420740811.

50

Park, A., K. Balasubramanian, and R. J. Lipton. "Array access bounds for block storage memory systems." IEEE Transactions on Computers 38, no. 6 (June 1989): 909–13. http://dx.doi.org/10.1109/12.24305.
