
Journal articles on the topic 'Read only storage'



Consult the top 50 journal articles for your research on the topic 'Read only storage.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Micic, Ljubomir, and Thomas Fischer. "Apparatus for the digital storage of audio signals employing read only memories." Journal of the Acoustical Society of America 94, no. 2 (August 1993): 1184. http://dx.doi.org/10.1121/1.406871.

2

Gambino, Richard J. "Optical Storage Disk Technology." MRS Bulletin 15, no. 4 (April 1990): 20–24. http://dx.doi.org/10.1557/s0883769400059911.

Abstract:
Optical storage of digital information has reached the consumer market in the form of the compact audio disk. In this technology, information is stored in the form of shallow pits embossed in a polymer surface. The surface is coated with a reflective thin metallic film, and the digital information, represented by the position and length of the pits, is read out optically with a focused, low-power (5 mW) laser beam. When used for information storage for a computer, this device is called a CD-ROM, a Compact Disc Read-Only Memory. The user can only extract information (digital data) from the disk without changing or adding any data; that is, it is possible to "read" but not to "write" or "erase" information. While it is an advantage to have permanently stored information in some cases (for example, when listening to Beethoven's Ninth Symphony), in other situations the read-only feature is not appropriate. For most computer applications, it is essential that the user be able to store information on the disk and read it back at will. For example, in a word processing task, such as typing this article, it is often necessary to store the document on a disk. Optical data storage for this purpose is available in the form of a Write Once Read Many times (WORM) optical disk drive. The operating principle in a WORM drive is to use a focused laser beam (20–40 mW) to make a permanent mark on a thin film on a disk. The information is then read out as a change in the optical properties of the disk, e.g., reflectivity or absorbance. These changes can take various forms: "hole burning" is the removal of material (typically a thin film of tellurium) by evaporation, melting, or spalling, sometimes referred to as laser ablation; bubble or pit formation involves deformation of the surface, usually of a polymer overcoat on a metal reflector.
3

Chakraborti, Anrin, and Radu Sion. "SqORAM: Read-Optimized Sequential Write-Only Oblivious RAM." Proceedings on Privacy Enhancing Technologies 2020, no. 1 (January 1, 2020): 216–34. http://dx.doi.org/10.2478/popets-2020-0012.

Abstract:
Oblivious RAMs (ORAMs) allow a client to access data from an untrusted storage device without revealing the access patterns. Typically, the ORAM adversary can observe both read and write accesses. Write-only ORAMs target a more practical, multi-snapshot adversary only monitoring client writes, typical for plausible deniability and censorship-resilient systems. This allows write-only ORAMs to achieve significantly better asymptotic performance. However, these apparent gains do not materialize in real deployments, primarily due to the random data placement strategies used to break correlations between logical and physical namespaces, a required property for write access privacy. Random access performs poorly on both rotational disks and SSDs (often increasing wear significantly and interfering with wear-leveling mechanisms). In this work, we introduce SqORAM, a new locality-preserving write-only ORAM that preserves write access privacy without requiring random data access. Data blocks close to each other in the logical domain land in close proximity on the physical media. Importantly, SqORAM maintains this data locality property over time, significantly increasing read throughput. A full Linux kernel-level implementation of SqORAM is 100x faster than non-locality-preserving solutions for standard workloads and is 60–100% faster than the state of the art for typical file system workloads.
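The random-placement strategy this abstract criticizes can be sketched in a few lines. The toy below (class and parameter names are illustrative, not SqORAM's) shows the conventional write-only ORAM idea: every logical write lands in a fresh, randomly chosen physical slot, so an adversary who only sees writes cannot correlate logical and physical addresses.

```python
import os
import random

class ToyWriteOnlyORAM:
    """Minimal write-only ORAM sketch (not SqORAM itself): writes go
    to random free physical slots; all slots hold random-looking data.
    A real system would additionally encrypt every block."""

    def __init__(self, n_slots, block_size=16):
        self.block_size = block_size
        self.slots = [os.urandom(block_size) for _ in range(n_slots)]
        self.free = set(range(n_slots))   # hidden client-side state
        self.pos = {}                     # logical id -> physical slot

    def write(self, logical_id, data):
        assert len(data) == self.block_size
        if logical_id in self.pos:        # old slot becomes reusable
            self.free.add(self.pos[logical_id])
        slot = random.choice(sorted(self.free))   # random placement
        self.free.remove(slot)
        self.pos[logical_id] = slot
        self.slots[slot] = data

    def read(self, logical_id):
        # reads are unobserved in this adversary model, so a plain
        # position-map lookup suffices
        return self.slots[self.pos[logical_id]]
```

Sequential logical writes end up scattered across the device, which is exactly the locality penalty SqORAM is designed to avoid.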
4

Jie, Song, Pei Jing, Xu Duan-Yi, Xiong Jian-Ping, Chen Ken, and Pan Long-Fa. "Pit Depth and Width Modulation Multilevel Run-Length Limited Read-Only Optical Storage." Chinese Physics Letters 23, no. 6 (May 30, 2006): 1504–6. http://dx.doi.org/10.1088/0256-307x/23/6/041.

5

Zhou, Zhiping, and Yu Ruan. "Optimization of information pit shape and read-out system in read-only and write-once optical storage systems." Applied Optics 27, no. 4 (February 15, 1988): 728. http://dx.doi.org/10.1364/ao.27.000728.

6

Baek, Sung Hoon, and Ki-Woong Park. "A Durable Hybrid RAM Disk with a Rapid Resilience for Sustainable IoT Devices." Sensors 20, no. 8 (April 11, 2020): 2159. http://dx.doi.org/10.3390/s20082159.

Abstract:
Flash-based storage is considered a de facto storage module for sustainable Internet of Things (IoT) platforms under harsh environments due to its relatively fast speed and operational stability compared to disk storage. Although its performance is considerably faster than that of disk-based mechanical storage devices, its read and write latency still cannot catch up with that of random-access memory (RAM). Therefore, RAM could be used as a storage device or system for time-critical IoT applications. Despite such advantages, a RAM-based storage system has limitations in its use for sustainable IoT devices due to its volatile nature. As a remedy to this problem, this paper presents a durable hybrid RAM disk enhanced with a new read interface. The proposed durable hybrid RAM disk is designed for sustainable IoT devices that require not only high read/write performance but also data durability. It includes two performance improvement schemes: rapid resilience with a fast initialization, and direct byte read (DBR). Rapid resilience with a fast initialization shortens the long booting time required to initialize the durable hybrid RAM disk. The new read interface, DBR, enables the durable hybrid RAM disk to bypass the disk cache, which is an overhead in RAM-based storage. DBR performs byte-range I/O, whereas direct I/O requires block-range I/O; therefore, it provides a more efficient interface than direct I/O. The presented schemes and device were implemented in the Linux kernel. Experimental evaluations were performed using various benchmarks from the block level to the file level. In workloads where reads and writes were mixed, the durable hybrid RAM disk showed 15 times better performance than a solid-state drive (SSD).
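The difference between block-range direct I/O and a byte-range interface like DBR can be illustrated with a toy sketch (the block size and helper names are assumptions for illustration, not the paper's code): block-range I/O must round the request out to whole blocks, so it transfers more bytes than were asked for.

```python
BLOCK = 4096  # assumed block size for this sketch

def block_read(dev, offset, length):
    """Block-range I/O (direct-I/O style): round the request out to
    whole blocks, transfer them, then slice out the wanted bytes."""
    start = (offset // BLOCK) * BLOCK
    end = -(-(offset + length) // BLOCK) * BLOCK   # round up
    raw = dev[start:end]                           # whole blocks moved
    return raw[offset - start:offset - start + length], end - start

def byte_read(dev, offset, length):
    """Byte-range I/O (DBR-style): RAM-backed storage can return
    exactly the requested bytes."""
    return dev[offset:offset + length], length

dev = bytes(range(256)) * 64                       # 16 KiB toy device
data_b, moved_b = block_read(dev, 10, 8)           # moves 4096 bytes
data_d, moved_d = byte_read(dev, 10, 8)            # moves 8 bytes
```

Both calls return the same 8 bytes, but the block-range path moves a full 4096-byte block to do it.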
7

Yang, C. P., S. H. Lin, M. L. Hsieh, K. Y. Hsu, and T. C. Hsieh. "A Holographic Memory for Digital Data Storage." International Journal of High Speed Electronics and Systems 08, no. 04 (December 1997): 749–65. http://dx.doi.org/10.1142/s0129156497000317.

Abstract:
A read-only holographic memory for digital data storage is experimentally demonstrated. Techniques for coding and decoding of optical signals, and the interface techniques between the optical memory and a personal computer are described. The performance of the optical memory and the techniques for improving the bit error rate (BER) are presented.
8

Hu, Liangyu, Yuai Duan, Zhenzhen Xu, Jing Yuan, Yuping Dong, and Tianyu Han. "Stimuli-responsive fluorophores with aggregation-induced emission: implication for dual-channel optical data storage." Journal of Materials Chemistry C 4, no. 23 (2016): 5334–41. http://dx.doi.org/10.1039/c6tc01179a.

9

Yang, Tianming, Jing Zhang, and Ningbo Hao. "Improving Read Performance with BP-DAGs for Storage-Efficient File Backup." Open Electrical & Electronic Engineering Journal 7, no. 1 (October 18, 2013): 90–97. http://dx.doi.org/10.2174/1874129001307010090.

Abstract:
The continued growth of data and the high continuity of applications have raised a critical and mounting demand for storage-efficient, high-performance data protection. New technologies, especially D2D (disk-to-disk) deduplication storage, have therefore received wide attention in both academia and industry in recent years. Existing deduplication systems mainly rely on duplicate locality inside the backup workload to achieve high throughput, but suffer from degraded read performance under conditions of poor duplicate locality. This paper presents the design and performance evaluation of a D2D-based deduplication file backup system, which employs caching techniques to improve write throughput while encoding files as graphs called BP-DAGs (Bi-Pointer-based Directed Acyclic Graphs). BP-DAGs not only satisfy the 'unique' chunk storing policy of deduplication, but also help improve file read performance for workloads with poor duplicate locality. Evaluation results show that the system can achieve read performance comparable to non-deduplication backup systems such as Bacula under representative workloads, and that the metadata storage overhead of BP-DAGs is reasonably low.
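The 'unique' chunk storing policy that BP-DAGs build on can be sketched as a content-addressed store: a chunk is written once and thereafter referenced by its fingerprint. The fixed-size chunking and names below are illustrative only; the paper's system is far more elaborate.

```python
import hashlib

class DedupStore:
    """Toy content-addressed chunk store: each distinct chunk is
    stored once, keyed by its SHA-256 fingerprint."""

    def __init__(self):
        self.chunks = {}                        # fingerprint -> bytes

    def put_file(self, data, chunk_size=8):
        """Store a file; return its recipe (ordered fingerprint list)."""
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)   # write only new chunks
            recipe.append(fp)
        return recipe

    def get_file(self, recipe):
        """Reassemble a file from its recipe of fingerprints."""
        return b"".join(self.chunks[fp] for fp in recipe)
```

A file containing two identical 8-byte chunks stores only one physical chunk; the read path then chases fingerprints, which is why poor duplicate locality hurts restore performance.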
10

Lv, Yi, Qian Wang, Houpeng Chen, Chenchen Xie, Shenglan Ni, Xi Li, and Zhitang Song. "Enhancing the Data Reliability of Multilevel Storage in Phase Change Memory with 2T2R Cell Structure." Micromachines 12, no. 9 (September 9, 2021): 1085. http://dx.doi.org/10.3390/mi12091085.

Abstract:
Multilevel storage and the continued scaling down of technology have significantly improved the storage density of phase change memory (PCM), but have also brought about a challenge: data reliability can degrade due to resistance drift. To ensure data reliability, many read and write operation technologies have been proposed. However, they only mitigate the influence on data through read and write operations after resistance drift occurs. In this paper, we consider the working principle of multilevel storage for PCM and present a novel 2T2R structure circuit to increase the storage density and fundamentally reduce the influence of resistance drift. To realize 3-bit-per-cell storage, a wide range of resistances was selected as the different states of phase change memory. We then propose a 4:3 compressing encoding scheme to transform the output data into binary data states. The designed 2T2R cell was shown to have optimized storage density and data reliability by monitoring the conductance distribution at four time points (1 ms, 1 s, 6 h, 12 h) across 4000 devices. Simulation results showed that our proposed 2T2R structure can significantly improve the storage density of multilevel storage and increase the data reliability of phase change memory.
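The generic idea behind 3-bit-per-cell multilevel storage can be sketched as a mapping between bit triples and eight cell levels. This is the textbook MLC concept only; the paper's specific 4:3 compressing encoding is not reproduced here.

```python
def encode_bits(bits):
    """Pack a binary string into cell levels: each 3-bit group maps
    to one of 8 conductance levels, giving 3 bits per cell."""
    assert len(bits) % 3 == 0
    return [int(bits[i:i + 3], 2) for i in range(0, len(bits), 3)]

def decode_levels(levels):
    """Read the stored levels back into the original bit string."""
    return "".join(format(level, "03b") for level in levels)
```

Resistance drift matters precisely because adjacent levels sit close together: if a cell's conductance drifts across a level boundary, three bits decode incorrectly at once.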
11

Cheng, Wen, Chunyan Li, Lingfang Zeng, Yingjin Qian, Xi Li, and André Brinkmann. "NVMM-Oriented Hierarchical Persistent Client Caching for Lustre." ACM Transactions on Storage 17, no. 1 (February 2, 2021): 1–22. http://dx.doi.org/10.1145/3404190.

Abstract:
In high-performance computing (HPC), data and metadata are stored on special server nodes and client applications access the servers’ data and metadata through a network, which induces network latencies and resource contention. These server nodes are typically equipped with (slow) magnetic disks, while the client nodes store temporary data on fast SSDs or even on non-volatile main memory (NVMM). Therefore, the full potential of parallel file systems can only be reached if fast client side storage devices are included into the overall storage architecture. In this article, we propose an NVMM-based hierarchical persistent client cache for the Lustre file system (NVMM-LPCC for short). NVMM-LPCC implements two caching modes: a read and write mode (RW-NVMM-LPCC for short) and a read only mode (RO-NVMM-LPCC for short). NVMM-LPCC integrates with the Lustre Hierarchical Storage Management (HSM) solution and the Lustre layout lock mechanism to provide consistent persistent caching services for I/O applications running on client nodes, meanwhile maintaining a global unified namespace of the entire Lustre file system. The evaluation results presented in this article show that NVMM-LPCC can increase the average read throughput by up to 35.80 times and the average write throughput by up to 9.83 times compared with the native Lustre system, while providing excellent scalability.
12

Haider, Syed, and Marten van Dijk. "Flat ORAM: A Simplified Write-Only Oblivious RAM Construction for Secure Processors." Cryptography 3, no. 1 (March 25, 2019): 10. http://dx.doi.org/10.3390/cryptography3010010.

Abstract:
Oblivious RAM (ORAM) is a cryptographic primitive which obfuscates the access patterns to a storage device, thereby preventing privacy leakage. So far, the literature has widely studied only 'fully functional' ORAMs, which can protect, at the cost of a considerable performance penalty, against strong adversaries who can monitor all read and write operations. However, recent research has shown that information can still be leaked even if only the write access pattern (not reads) is visible to the adversary. For such weaker adversaries, a fully functional ORAM turns out to be overkill, causing unnecessary overheads. Instead, a simple 'write-only' ORAM is sufficient and, more interestingly, is preferred, as it can offer far better performance and energy efficiency than a fully functional ORAM. In this work, we present Flat ORAM: an efficient write-only ORAM scheme which outperforms the closest existing write-only ORAM, called HIVE. HIVE suffers from performance bottlenecks while managing the memory occupancy information vital to the correctness of the protocol. Flat ORAM introduces the simple idea of an Occupancy Map (OccMap) to efficiently manage the memory occupancy information, resulting in far better performance. Our simulation results show that, compared to HIVE, Flat ORAM offers a 50% performance gain on average and up to 80% energy savings.
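The occupancy-map idea can be sketched as a packed bitmap, one bit per physical block. This is a generic sketch of the data structure, not Flat ORAM's actual layout or protocol.

```python
class OccMap:
    """Toy occupancy map: one bit per physical block, packed into a
    bytearray, tracking which blocks hold valid data."""

    def __init__(self, n_blocks):
        self.n_blocks = n_blocks
        self.bits = bytearray((n_blocks + 7) // 8)

    def set(self, i, occupied=True):
        """Mark physical block i as occupied (or free)."""
        if occupied:
            self.bits[i // 8] |= 1 << (i % 8)
        else:
            self.bits[i // 8] &= ~(1 << (i % 8)) & 0xFF

    def is_occupied(self, i):
        return bool(self.bits[i // 8] & (1 << (i % 8)))

    def find_free(self):
        """Return the first free block index, or None if all are full."""
        for i in range(self.n_blocks):
            if not self.is_occupied(i):
                return i
        return None
```

The appeal is compactness: one bit of metadata per block means the whole map for millions of blocks fits in a few hundred kilobytes and can stay resident.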
13

Zang, Jie, Xiao Li Wang, and Zhong Hua Yan. "Design of a USB OTG Mass Storage Module Based on LM3S9B90." Advanced Materials Research 588-589 (November 2012): 735–38. http://dx.doi.org/10.4028/www.scientific.net/amr.588-589.735.

Abstract:
This paper introduces the design of a USB On-The-Go (OTG) mass storage module that conforms to the USB 2.0 specification and is based on the LM3S9B90 embedded system. The module's hardware and software systems are described, with a focus on the implementation of the USB host system. The module realizes the USB OTG function: it can not only read from and write to USB mass storage devices, but can also exchange data with a USB host while acting as a mass storage device itself. It can therefore serve as an extension of a USB system and has good application value.
14

Nurdin, Hendra I., and John E. Gough. "Modular quantum memories using passive linear optics and coherent feedback." Quantum Information and Computation 15, no. 11&12 (September 2015): 1017–40. http://dx.doi.org/10.26421/qic15.11-12-9.

Abstract:
In this paper, we show that quantum memory for qudit states encoded in a single-photon pulsed optical field has a conceptually simple modular realization using only passive linear optics and coherent feedback. We exploit the idea that two decaying optical cavities can be coupled in a coherent feedback configuration to create an internal mode of the coupled system which is isolated and decoherence-free for the purpose of qubit storage. The qubit memory can then be switched between writing/read-out mode and storage mode simply by varying the routing of certain freely propagating optical fields in the network. It is then shown that the qubit memories can be interconnected with one another to form a qudit quantum memory. We explain each of the phases of writing, storage, and read-out for this modular quantum memory scheme. The results point a way towards modular architectures for complex compound quantum memories.
15

Coufal, Hans, Lisa Dhar, and C. Denis Mee. "Materials for Magnetic Data Storage: The Ongoing Quest for Superior Magnetic Materials." MRS Bulletin 31, no. 5 (May 2006): 374–78. http://dx.doi.org/10.1557/mrs2006.96.

Abstract:
From its inception until today, and for the foreseeable future, magnetic data storage on disks and tape has provided constantly increasing storage density. This has required not only constant innovation, but also major breakthroughs in magnetic materials, both for the media and for the read head. Today's disk and tape drives take advantage of novel nanoengineered composite magnetic materials and quantum mechanical processes. In this issue of MRS Bulletin, we present a number of review articles by some of the leaders in this rapidly moving field that highlight the key materials science accomplishments that have enabled the tremendous progress in hard disk drive and magnetic tape technologies. Individual articles describe the materials involved in state-of-the-art magnetic recording, advanced media for perpendicular magnetic recording, the materials challenges of achieving high performance in flexible media such as magnetic tape, the materials issues of read heads, and future avenues for magnetic storage beyond magnetic recording, such as nanowires and spintronics.
16

RB, Madhumala, Sujan Chhetri, Akshatha KC, and Hitesh Jain. "Secure File Storage & Sharing on Cloud Using Cryptography." International Journal of Computer Science and Mobile Computing 10, no. 5 (May 30, 2021): 49–59. http://dx.doi.org/10.47760/ijcsmc.2021.v10i05.005.

Abstract:
In today's world, simply having the capacity to transfer a file from one location to another isn't enough. Businesses today face multiple security threats and a highly competitive environment, so they need a secure file transfer system to protect and reliably transfer their sensitive, business-critical data. Secure file transfer is a method of data sharing via a secure, reliable delivery method, used here between a client and a server. Cryptography is a technique for securing information and communication in the presence of third parties; it ensures that only those for whom the information is intended can read it, preventing unauthorized users from accessing privately shared information. In this paper, the proposed plan is to store user data on the cloud in encrypted rather than plain form, so that the data are protected from attackers who try to read, delete, or manipulate them. Our application focuses on securely authenticating the user before storing and sharing files; letting a user encrypt and decrypt any type of file with no change in size during encryption and decryption; storing all user data in encrypted form on the cloud; providing a communication medium between users via a chat application; and giving direct access to a file for CRUD operations only to its owner.
17

Wilton, Richard, and Alexander S. Szalay. "Arioc: High-concurrency short-read alignment on multiple GPUs." PLOS Computational Biology 16, no. 11 (November 9, 2020): e1008383. http://dx.doi.org/10.1371/journal.pcbi.1008383.

Abstract:
In large DNA sequence repositories, archival data storage is often coupled with computers that provide 40 or more CPU threads and multiple GPU (general-purpose graphics processing unit) devices. This presents an opportunity for DNA sequence alignment software to exploit high-concurrency hardware to generate short-read alignments at high speed. Arioc, a GPU-accelerated short-read aligner, can compute WGS (whole-genome sequencing) alignments ten times faster than comparable CPU-only alignment software. When two or more GPUs are available, Arioc's speed increases proportionately because the software executes concurrently on each available GPU device. We have adapted Arioc to recent multi-GPU hardware architectures that support high-bandwidth peer-to-peer memory accesses among multiple GPUs. By modifying Arioc's implementation to exploit this GPU memory architecture we obtained a further 1.8x-2.9x increase in overall alignment speeds. With this additional acceleration, Arioc computes two million short-read alignments per second in a four-GPU system; it can align the reads from a human WGS sequencer run–over 500 million 150nt paired-end reads–in less than 15 minutes. As WGS data accumulates exponentially and high-concurrency computational resources become widespread, Arioc addresses a growing need for timely computation in the short-read data analysis toolchain.
18

Jayakumar, N., and A. M. Kulkarni. "A Simple Measuring Model for Evaluating the Performance of Small Block Size Accesses in Lustre File System." Engineering, Technology & Applied Science Research 7, no. 6 (December 18, 2017): 2313–18. http://dx.doi.org/10.48084/etasr.1557.

Abstract:
Storage performance is one of the vital characteristics of a big data environment. Data throughput can be increased to some extent using storage virtualization and parallel data paths. Technology has enhanced various SANs and storage topologies to be adaptable to diverse applications that improve end-to-end performance. In big data environments, the most widely used file systems are HDFS (Hadoop Distributed File System) and Lustre. There are environments in which both HDFS and Lustre are connected, and the applications work directly on Lustre. In a Lustre architecture with an out-of-band storage virtualization system, the separation of the data path from the metadata path is acceptable (and even desirable) for large files, since one MDT (Metadata Target) open RPC is typically a small fraction of the total number of read or write RPCs. This hurts small-file performance significantly when there is only a single read or write RPC for the file data. Since applications require data for processing, and in-situ architectures bring data and metadata close to the applications that process them, this work studies how in-situ processing can be exploited in Lustre. Earlier research exploited Lustre's support for in-situ processing when Hadoop/MapReduce is integrated with Lustre, but scope for performance improvement in Lustre remained. The aim of this research is to check whether it is feasible and beneficial to move small files to the MDT so that additional RPCs and I/O overhead can be eliminated and the read/write performance of the Lustre file system can be improved.
19

Blomer, Jakob, Philippe Canal, Axel Naumann, and Danilo Piparo. "Evolution of the ROOT Tree I/O." EPJ Web of Conferences 245 (2020): 02030. http://dx.doi.org/10.1051/epjconf/202024502030.

Abstract:
The ROOT TTree data format encodes hundreds of petabytes of High Energy and Nuclear Physics events. Its columnar layout drives rapid analyses, as only those parts (“branches”) that are really used in a given analysis need to be read from storage. Its unique feature is the seamless C++ integration, which allows users to directly store their event classes without explicitly defining data schemas. In this contribution, we present the status and plans of the future ROOT 7 event I/O. Along with the ROOT 7 interface modernization, we aim for robust, where possible compile-time safe C++ interfaces to read and write event data. On the performance side, we show first benchmarks using ROOT’s new experimental I/O subsystem that combines the best of TTrees with recent advances in columnar data formats. A core ingredient is a strong separation of the high-level logical data layout (C++ classes) from the low-level physical data layout (storage backed nested vectors of simple types). We show how the new, optimized physical data layout speeds up serialization and deserialization and facilitates parallel, vectorized and bulk operations. This lets ROOT I/O run optimally on the upcoming ultra-fast NVRAM storage devices, as well as file-less storage systems such as object stores.
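The columnar "branch" layout the abstract describes can be illustrated with a toy comparison of bytes touched when reading a single field; the sizes and names below are illustrative, not ROOT's actual on-disk format.

```python
# Row-wise layout: every event is one record, so reading a single
# field still drags the whole record in from storage.
events_rowwise = [
    {"px": 1.0, "py": 2.0, "E": 5.0},
    {"px": 0.5, "py": 1.5, "E": 3.0},
]

# Columnar ("branch") layout: one contiguous array per field, so an
# analysis that needs only E touches only the E array on storage.
events_columnar = {
    "px": [1.0, 0.5],
    "py": [2.0, 1.5],
    "E":  [5.0, 3.0],
}

FIELD_SIZE = 8  # bytes per double, for the accounting below

def rowwise_bytes_to_scan(n_events, n_fields):
    """Bytes touched to read one field when fields are interleaved."""
    return n_events * n_fields * FIELD_SIZE

def columnar_bytes_to_scan(n_events):
    """Bytes touched to read one field stored as its own column."""
    return n_events * FIELD_SIZE

total_energy = sum(events_columnar["E"])   # reads a single branch
```

With three fields per event, a one-branch analysis scans a third of the bytes in the columnar layout, which is the effect that makes TTree-style analyses fast.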
20

Tsoukalas, Dimitris, S. Kolliopoulou, P. Dimitrakis, P. Normand, and M. C. Petty. "Nanoparticles for Charge Storage Using Hybrid Organic Inorganic Devices." Advances in Science and Technology 54 (September 2008): 451–57. http://dx.doi.org/10.4028/www.scientific.net/ast.54.451.

Abstract:
We present a concept for the integration of low-temperature-fabricated memory devices in a 3-D architecture using a hybrid silicon-organic technology. The realization of an electrically erasable programmable read-only memory (EEPROM)-like device is based on the fabrication of a V-groove SiGe MOSFET, the functionalization of a gate oxide followed by self-assembly of gold nanoparticles, and finally the deposition of an organic insulator by the Langmuir-Blodgett (LB) technique. Such structures were processed at temperatures below 400°C following a process based on wafer bonding. The electrical characteristics of the final hybrid MISFET memory cells were evaluated in terms of memory window and program/erase voltage pulses. A model describing the memory characteristics, based on the electronic properties of the gate stack materials, is presented.
21

Lai, Longbin, Linfeng Shen, Yanfei Zheng, Kefei Chen, and Jing Zhang. "Analysis for REPERA." International Journal of Cloud Applications and Computing 2, no. 1 (January 2012): 71–82. http://dx.doi.org/10.4018/ijcac.2012010105.

Abstract:
Distributed systems, especially those providing cloud services, endeavor to construct sufficiently reliable storage in order to attract more customers. Generally, pure replication and erasure coding are widely adopted in distributed systems to guarantee reliable data storage, yet both have deficiencies. Pure replication consumes too much extra storage and bandwidth, while erasure coding is less efficient and suitable only for read-only contexts. The authors propose REPERA, a hybrid mechanism combining pure replication and erasure coding to leverage their advantages and mitigate their shortcomings. This paper qualitatively compares fault-resilient distributed architectures built with pure replication, erasure coding, and REPERA. The authors show that systems employing REPERA obtain higher availability than erasure-resilient systems and benefit from more durable storage than replicated systems, with space and bandwidth consumption similar to replicated systems. Furthermore, since REPERA was developed on an open platform, the authors prepared an experiment to evaluate its performance against the original system.
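The replication-versus-erasure-coding trade-off that REPERA combines can be illustrated with the simplest erasure code, single-block XOR parity. This is a generic sketch, not REPERA's actual coding scheme.

```python
def xor_parity(blocks):
    """RAID-4/5-style parity: one extra block lets any single lost
    data block be rebuilt (the simplest erasure code)."""
    parity = bytes(len(blocks[0]))
    for block in blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return parity

def recover(surviving, parity):
    """XOR the survivors into the parity to rebuild the missing block."""
    missing = parity
    for block in surviving:
        missing = bytes(a ^ b for a, b in zip(missing, block))
    return missing
```

Protecting three blocks costs one parity block (33% overhead) instead of the 200% overhead of keeping three full replicas, but any update must rewrite the parity too, which is why erasure coding suits read-mostly data.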
22

Baszyński, Marcin, and Tomasz Siostrzonek. "Flywheel energy storage control system with the system operating status control via the Internet." Archives of Electrical Engineering 63, no. 3 (September 1, 2014): 457–67. http://dx.doi.org/10.2478/aee-2014-0033.

Abstract:
Modern power electronic systems consist not only of converters but also of a user-friendly interface that allows the operating parameters to be read and changed. The simplest user-interface solution is an alphanumeric display that shows information about the state of the converter; with a few additional buttons, the settings can be changed. This solution is simple and inexpensive, but it allows only local control (within walking distance of the system), and the amount of information that can be displayed is low. An extensive menu can be created, but it causes problems with access to information. This paper presents a universal solution for a flywheel energy storage system that avoids the above-mentioned disadvantages.
23

He, Ming Xiang, Guan Li, and Xin Ming Lu. "A Method of Geospatial Data Files Conversion Based on Semantic." Applied Mechanics and Materials 303-306 (February 2013): 2221–26. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.2221.

Abstract:
Data conversion is a necessary step in establishing a unified data storage and management mode. Based on an analysis of the structure of the Shapefile format and the principle of semantic conversion, this paper proposes a semantic-based method for converting geospatial data files. The method converts Shapefiles into a relational database; applying it can not only increase data read speed but also ease data management and sharing.
24

Iraci, Joe. "Blu-Ray Media Stability and Suitability for Long-Term Storage." Restaurator. International Journal for the Preservation of Library and Archival Material 39, no. 2 (July 26, 2018): 129–55. http://dx.doi.org/10.1515/res-2017-0016.

Abstract:
The most recent generation of optical disc media available is the Blu-ray format. Blu-rays offer significantly more storage capacity than compact discs (CDs) and digital versatile discs (DVDs) and thus are an attractive option for the storage of large image, audio, and video files. However, uncertainty exists about the stability and longevity of Blu-ray discs, and the literature does not contain much information on these topics. In this study, the stabilities of Blu-ray formats such as read-only movie discs, as well as many different brands of recordable and erasable media, were evaluated. Testing involved the exposure of samples to conditions of 80 °C and 85% relative humidity for intervals up to 84 days. Overall, the stability of the Blu-ray formats was poor, with many discs significantly degraded after only 21 days of accelerated ageing. In addition to large increases in error rates, many discs showed easily identifiable visible degradation in several different forms. In a comparison with other optical disc formats examined previously, Blu-ray stability ranked very low. Other data from the study indicated that recording Blu-ray media with low initial error rates is currently challenging for some brands, a factor that ultimately affects longevity.
25

Li, Tianyu, Matthew Butrovich, Amadou Ngom, Wan Shen Lim, Wes McKinney, and Andrew Pavlo. "Mainlining databases." Proceedings of the VLDB Endowment 14, no. 4 (December 2020): 534–46. http://dx.doi.org/10.14778/3436905.3436913.

Full text
Abstract:
The proliferation of modern data processing tools has given rise to open-source columnar data formats. These formats help organizations avoid repeated conversion of data to a new format for each application. However, these formats are read-only, and organizations must use a heavy-weight transformation process to load data from on-line transactional processing (OLTP) systems. As a result, DBMSs often fail to take advantage of full network bandwidth when transferring data. We aim to reduce or even eliminate this overhead by developing a storage architecture for in-memory database management systems (DBMSs) that is aware of the eventual usage of its data and emits columnar storage blocks in a universal open-source format. We introduce relaxations to common analytical data formats to efficiently update records and rely on a lightweight transformation process to convert blocks to a read-optimized layout when they are cold. We also describe how to access data from third-party analytical tools with minimal serialization overhead. We implemented our storage engine based on the Apache Arrow format and integrated it into the NoisePage DBMS to evaluate our work. Our experiments show that our approach achieves comparable performance with dedicated OLTP DBMSs while enabling orders-of-magnitude faster data exports to external data science and machine learning tools than existing methods.
APA, Harvard, Vancouver, ISO, and other styles
26

Naderi, Hamid, and Behzad Kiani. "Security Challenges in Android mHealth Apps Permissions: A Case Study of Persian Apps." Frontiers in Health Informatics 9, no. 1 (September 2, 2020): 41. http://dx.doi.org/10.30699/fhi.v9i1.224.

Full text
Abstract:
Introduction: In this study, Persian Android mobile health (mHealth) applications were studied to describe the usage of dangerous permissions in health-related mobile applications, and the normal and dangerous permissions most frequently used in mHealth applications were reviewed. Materials and Methods: We wrote a PHP script to crawl information on Android apps in the "health" and "medicine" categories from the Cafebazaar app store, then extracted the permission information of these applications. Results: 11627 permissions from the 3331 studied apps were obtained. There was at least one dangerous permission in 48% of the reviewed apps. 41% of free applications, 53% of paid applications and 71% of applications with in-app purchases contained dangerous permissions. 1321 applications had permission to write to the phone's external storage (40%), 1288 applications had access to read from external storage (39%), 422 applications could read the contact list and ongoing calls (13%) and 188 applications were allowed to access the phone's location (5%). Conclusion: Most Android permissions are harmless, but a significant number of the apps have at least one dangerous permission, which increases the security risk. Paying attention to the permissions requested at the installation step is the best way to ensure that an application installed on your phone can only access what you want.
APA, Harvard, Vancouver, ISO, and other styles
27

Chen, Zheng Guo, Nong Xiao, Fang Liu, Yu Xuan Xing, and Zhen Sun. "Using FPGA to Accelerate Deduplication on High-Performance SSD." Advanced Materials Research 1042 (October 2014): 212–17. http://dx.doi.org/10.4028/www.scientific.net/amr.1042.212.

Full text
Abstract:
Data deduplication technology, applied in solid-state disks (SSDs), can reduce the number of write operations and the amount of garbage collection, and thus improve write performance and prolong lifetime. With the significant increase in SSD write performance, whether deduplication on SSDs could itself become a performance bottleneck is worth attention. To this end, this paper first performs an experiment on software-based deduplication and reveals that it decreases the SSD's read and write performance. A hardware-based deduplication design is then proposed in detail and implemented, using an FPGA to accelerate deduplication, and the expected results are achieved. Finally, we conclude that hardware-based deduplication can not only preserve the read and write performance of the SSD, but also save storage capacity and enhance endurance.
APA, Harvard, Vancouver, ISO, and other styles
28

Mkrtchyan, Tigran, Olufemi Adeyemi, Patrick Fuhrmann, Vincent Garonne, Dmitry Litvintsev, Paul Millar, Albert Rossi, et al. "dCache - joining the noWORM storage club." EPJ Web of Conferences 214 (2019): 04048. http://dx.doi.org/10.1051/epjconf/201921404048.

Full text
Abstract:
For over a decade, dCache.ORG has provided robust software, called dCache, that is used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments and many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from an all-in-one Raspberry Pi up to hundreds of nodes in multi-petabyte infrastructures. The life cycle of scientific data is well defined - collected, processed, archived and finally deleted when it is no longer needed. Moreover, during all those stages the data is never modified: either the original data is used, or new derived data is produced. With this knowledge, dCache was designed to handle immutable files as efficiently as possible. Data replication, HSM connectivity and data-server independent operations are only possible due to the immutable nature of stored data. Nowadays many commercial vendors provide such write-once-read-many (WORM) storage systems, which are increasingly in demand as audio, photo and video content on the web grows. On the other hand, by providing a standard NFSv4.1 interface, dCache is often used as a general-purpose file system, especially by new communities, such as photon scientists or microbiologists. Although many users are aware of data immutability, some applications and use cases still require in-place updates of stored files. Satisfying these new requirements means applying some fundamental changes to dCache's core design. However, new developments must not compromise any aspect of existing functionality. In this presentation we will show new developments that turn dCache into a regular file system. We will discuss the challenges of building a distributed storage system, 'life' with POSIX compliance, the handling of multiple replicas, and backward compatibility achieved by providing WORM and noWORM capabilities within the same storage system.
APA, Harvard, Vancouver, ISO, and other styles
29

Huang, Xue Mei, and Jin Chuan Wang. "Rapid Extraction and Compression of DICOM Data for Medical Image Geometric Modeling." Advanced Materials Research 433-440 (January 2012): 7511–15. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.7511.

Full text
Abstract:
This paper presents a method of extracting and compressing the data required from DCM files for medical image geometric modeling. According to the characteristics of DICOM data, and combining the idea of run-length coding with block coding, rapid data compression and storage in RAM is realized. Compared with other coding methods, the encoding approach for DICOM data presented in this paper not only saves memory space and improves transmission efficiency, but also allows a single required pixel, or part of the pixel data, to be read conveniently from the compressed data.
APA, Harvard, Vancouver, ISO, and other styles
30

Xiao, Chuqiao, Yefeng Xia, Qian Zhang, Xueqing Gong, and Liyan Zhu. "CBase-EC: Achieving Optimal Throughput-Storage Efficiency Trade-Off Using Erasure Codes." Electronics 10, no. 2 (January 8, 2021): 126. http://dx.doi.org/10.3390/electronics10020126.

Full text
Abstract:
Many distributed database systems that guarantee high concurrency and scalability adopt a read-write separation architecture. At the same time, these systems need to store massive amounts of data daily, requiring different mechanisms for storing and accessing data, such as hot and cold data access strategies. Unlike distributed storage systems, a distributed database splits a table into sub-tables or shards, and the request frequency of each sub-table varies within a given time window. Therefore, it is necessary to design not only hot-to-cold approaches to reduce storage overhead, but also cold-to-hot methods to ensure the high concurrency of those systems. We present a new redundancy strategy named CBase-EC, which uses erasure codes to trade off transaction processing performance against storage efficiency in the CBase database system developed for financial banking scenarios. Two algorithms are proposed: a hot-cold tablet (shard) recognition algorithm and a hot-cold dynamic conversion algorithm. We then adopt two optimization approaches to improve CBase-EC performance. In the experiments, we compare CBase-EC with the three-replica scheme in CBase. The experimental results show that although transaction processing performance declined by no more than 6%, storage efficiency increased by 18.4%.
APA, Harvard, Vancouver, ISO, and other styles
32

Khuyen, Nguyen Quang, Rudolf Kiefer, Zane Zondaka, Gholamreza Anbarjafari, Anna-Liisa Peikolainen, Toribio F. Otero, and Tarmo Tamm. "Multifunctionality of Polypyrrole Polyethyleneoxide Composites: Concurrent Sensing, Actuation and Energy Storage." Polymers 12, no. 9 (September 10, 2020): 2060. http://dx.doi.org/10.3390/polym12092060.

Full text
Abstract:
In films of conducting polymers, the electrochemical reaction(s) drive the simultaneous variation of different material properties (reaction multifunctionality). Here, we present a parallel study of actuation-sensing-energy storage triple functionality of polypyrrole (PPy) blends with dodecylbenzenesulfonate (DBS-), PPy/DBS, without and with inclusion of polyethyleneoxide, PPy-PEO/DBS. The characterization of the response of both materials in aqueous solutions of four different salts indicated that all of the actuating, sensing and charge storage responses were, independent of the electrolyte, present for both materials, but stronger for the PPy-PEO/DBS films: 1.4× higher strains, 1.3× higher specific charge densities, 2.5× higher specific capacitances and increased ion-sensitivity towards the studied counterions. For both materials, the reaction energy, the material potential and the strain variations adapt to and sense the electrical and chemical (exchanged cation) conditions. The driving and the response of actuation, sensing and charge can be controlled/read, simultaneously, via just two connecting wires. Only the cooperative actuation of chemical macromolecular motors from functional cells has such chemical multifunctionality.
APA, Harvard, Vancouver, ISO, and other styles
33

Ma, Kainan, Ming Liu, Tao Li, Yibo Yin, and Hongda Chen. "A Low-Cost Improved Method of Raw Bit Error Rate Estimation for NAND Flash Memory of High Storage Density." Electronics 9, no. 11 (November 12, 2020): 1900. http://dx.doi.org/10.3390/electronics9111900.

Full text
Abstract:
Cells wear out quickly in NAND flash memory of high storage density (HSD), so long-term, frequent and timely monitoring of changes in its raw bit error rate (RBER) through a fast RBER estimation method is necessary. As HSD flash already has a relatively low reading speed, the method should not further degrade its read performance. This paper proposes an improved estimation method based on known-data comparison. It includes interleaving to balance the uneven error distribution in HSD flash, a fast RBER estimation module that makes the estimated RBER highly linearly correlated with the actual RBER, and enhancement strategies that accelerate the decoding convergence of low-density parity-check (LDPC) codes and thereby make up for the rate penalty caused by the known data. Experimental results show that when the RBER is close to the upper bound of the LDPC code, reading efficiency can be increased by 35.8% compared to the case of no rate penalty. The proposed method occupies only 0.039 mm² at a 40 nm process node. Hence, this fast, read-performance-improving and low-cost method has great application potential for RBER monitoring in HSD flash.
APA, Harvard, Vancouver, ISO, and other styles
34

Holomany, Mark, and Ron Jenkins. "Use of the Crystal Data File on CD-ROM." Advances in X-ray Analysis 32 (1988): 539–44. http://dx.doi.org/10.1154/s0376030800020875.

Full text
Abstract:
We have recently described the use of the Compact Disk Read Only Memory (CD-ROM) disk for the storage of data in the Powder Diffraction File (PDF). This work has now been extended to include the NBS (1987) Crystal Data File (CDF). The CDF contains 115,753 entries, of which 59,613 are inorganic materials and 56,140 organic. The database can be accessed either by means of bit maps built on chemistry and subfile restrictions, or by means of a Boolean search system allowing combinations of search parameters including chemistry, space group, cell volume, density and unit cell data.
APA, Harvard, Vancouver, ISO, and other styles
35

Tu, Tengfei, Lu Rao, Hua Zhang, Qiaoyan Wen, and Jia Xiao. "Privacy-Preserving Outsourced Auditing Scheme for Dynamic Data Storage in Cloud." Security and Communication Networks 2017 (2017): 1–17. http://dx.doi.org/10.1155/2017/4603237.

Full text
Abstract:
As information technology develops, cloud storage has been widely accepted for keeping large volumes of data. A remote data auditing scheme enables a cloud user to confirm the integrity of her outsourced file by auditing the cloud storage, without downloading the file from the cloud. In view of the significant computational cost caused by the auditing process, the outsourced auditing model was proposed to let the user outsource the heavy auditing task to a third-party auditor (TPA). Although the first outsourced auditing scheme can protect against a malicious TPA, it gives the TPA read access rights over the user's outsourced data, which is a potential risk to user data privacy. In this paper, we introduce the notion of User Focus for outsourced auditing, which emphasizes the idea of letting the user dominate her own data. Based on User Focus, our proposed scheme not only prevents the user's data from leaking to the TPA without depending on data encryption, but also avoids the use of an additional independent random source that is very difficult to provide in practice. We also describe how to make our scheme support dynamic updates. According to the security analysis and experimental evaluations, our proposed scheme is provably secure and significantly efficient.
APA, Harvard, Vancouver, ISO, and other styles
36

Rizkiyatussani, Her Gumiwang Ariswati, and Syaifudin. "Five Channel Temperature Calibrator Using Thermocouple Sensors Equipped With Data Storage." Journal of Electronics, Electromedical Engineering, and Medical Informatics 1, no. 1 (July 22, 2019): 1–5. http://dx.doi.org/10.35882/jeeemi.v1i1.1.

Full text
Abstract:
A temperature calibration device is a tool used to measure the accuracy of temperature-related equipment such as a sterilizer. Such a calibration device is needed when the temperature in the sterilizer is not linear. In this calibration tool, the sensor used is a type-K thermocouple that is inserted into the medium to be measured, after which the temperature is read. The tool was designed using a pre-experimental method with an after-only research design. It is equipped with storage on a microSD card and a conversion mode to convert temperature readings from Celsius to Réaumur, Fahrenheit and Kelvin. Temperature results are displayed on a 4x20 LCD and processed using an Arduino UNO. This module can be used in medical equipment calibration laboratories. After testing the module against a comparison device from BPFK, the largest error obtained was 1% at 50 °C, 100 °C and 150 °C, and the smallest error was 0% at 50 °C and 150 °C. It can be concluded that the "Temperature Calibrator (5 Channels) Using Thermocouple Equipped with Data Storage" performs within these error margins.
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Rongjie, Junyi Li, Yang Bai, Tianyi Zang, and Yadong Wang. "BdBG: a bucket-based method for compressing genome sequencing data with dynamic de Bruijn graphs." PeerJ 6 (October 19, 2018): e5611. http://dx.doi.org/10.7717/peerj.5611.

Full text
Abstract:
Dramatic increases in the data produced by next-generation sequencing (NGS) technologies demand data compression tools for saving storage space. However, effective and efficient data compression for genome sequencing data has remained an unresolved challenge in NGS data studies. In this paper, we propose a novel alignment-free and reference-free compression method, BdBG, which is the first to compress genome sequencing data with dynamic de Bruijn graphs after bucketing the data. Compared with existing de Bruijn graph methods, BdBG stores only a list of bucket indexes and bifurcations for the raw read sequences, a feature that effectively reduces storage space. Experimental results on several genome sequencing datasets show the effectiveness of BdBG over three state-of-the-art methods. BdBG is written in Python and is open-source software distributed under the MIT license, available for download at https://github.com/rongjiewang/BdBG.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Zhisheng, Renjun Song, Qiang Huo, Qirui Ren, Chenrui Zhang, Linan Li, and Feng Zhang. "Analysis of Leakage Current of HfO2/TaOx-Based 3-D Vertical Resistive Random Access Memory Array." Micromachines 12, no. 6 (May 26, 2021): 614. http://dx.doi.org/10.3390/mi12060614.

Full text
Abstract:
Three-dimensional vertical resistive random access memory (VRRAM) has been proposed as a promising candidate for increasing resistive memory storage density, but the performance evaluation mechanisms for 3-D VRRAM arrays are still not mature. Previous approaches to evaluating 3-D VRRAM performance were based on the write and read margins. However, the leakage current (LC) of a 3-D VRRAM array is a concern as well. Excess leakage currents not only reduce the read/write margin and reliability of the memory cell but also increase the power consumption of the entire array. In this article, a 3-D circuit HSPICE simulation is used to analyze the impact of array size and operating voltage on the leakage current in the 3-D VRRAM architecture. The simulation results show that rapidly increasing leakage currents significantly affect the number of 3-D layers. A high read voltage is beneficial for enhancing the read margin, but the leakage current also increases; alleviating this conflict requires a trade-off when setting the input voltage. A method to improve array read/write efficiency is proposed by analyzing the influence of multi-bit operations on the overall leakage current. Finally, this paper explores different methods of reducing the leakage current in the 3-D VRRAM array. The leakage current model proposed in this paper provides an efficient performance prediction solution for the initial design of 3-D VRRAM arrays.
APA, Harvard, Vancouver, ISO, and other styles
39

Cao, Chan, Lucien F. Krapp, Abdelaziz Al Ouahabi, Niklas F. König, Nuria Cirauqui, Aleksandra Radenovic, Jean-François Lutz, and Matteo Dal Peraro. "Aerolysin nanopores decode digital information stored in tailored macromolecular analytes." Science Advances 6, no. 50 (December 2020): eabc2661. http://dx.doi.org/10.1126/sciadv.abc2661.

Full text
Abstract:
Digital data storage is a growing need for our society, and finding alternatives to solutions based on silicon or magnetic tape is a challenge in the era of "big data." The recent development of polymers that can store information at the molecular level has opened up new opportunities for ultrahigh-density data storage, long-term archiving, anticounterfeiting systems, and molecular cryptography. However, synthetic informational polymers have so far been deciphered only by tandem mass spectrometry. In comparison, nanopore technology can be faster, cheaper and nondestructive, and provides detection at the single-molecule level; moreover, it can be massively parallelized and miniaturized in portable devices. Here, we demonstrate the ability of engineered aerolysin nanopores to accurately read, with single-bit resolution, the digital information encoded in tailored informational polymers, alone and in mixed samples, without compromising information density. These findings open promising possibilities for developing writing-reading technologies that process digital data using a biologically inspired platform.
APA, Harvard, Vancouver, ISO, and other styles
40

Gao, Jintao, Wenjie Liu, and Zhanhuai Li. "A Strategy of Data Synchronization in Distributed System with Read Separating from Write." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 1 (February 2020): 209–15. http://dx.doi.org/10.1051/jnwpu/20203810209.

Full text
Abstract:
Read-write separation is a strategy that NewSQL systems adopt to combine the advantages of traditional relational databases and NoSQL databases. Under this architecture, baseline data is split into multiple partitions stored at distributed physical nodes, while delta data is stored at a single transaction node. To reduce the pressure on the transaction node and improve query performance, delta data needs to be synchronized into the storage nodes. Current strategies trigger data synchronization per partition, meaning that unchanged partitions also participate in synchronization, which consumes extra network bandwidth, local I/O and space. To improve the efficiency of data synchronization while reducing space utilization, a fine-grained data synchronization strategy is proposed. Its main ideas are: fine-grained logical partitions are established on top of the original coarse-grained partitions, providing a more precise unit of synchronization; a delta-data sensing strategy is introduced, which records the mapping between changed partitions and their delta data; and, instead of being partition-driven, data synchronization is driven through a delta-broadcasting mechanism, which ensures that only changed partitions participate. The fine-grained data synchronization strategy is implemented on OceanBase, a distributed database with read-write separation, and the results show that it outperforms other strategies in both synchronization efficiency and space utilization.
APA, Harvard, Vancouver, ISO, and other styles
41

Deutsch, E. S. "A Review of Some Electronic Text-Document Handling, Storage and Retrieval Systems." Journal of Information Technology 1, no. 2 (June 1986): 39–45. http://dx.doi.org/10.1177/026839628600100209.

Full text
Abstract:
This paper investigates some of the currently available optical disk storage and retrieval systems, image manipulation systems and OCR systems. Future developments are presented and an attempt at outlining a longer-term trend is made. The main conclusions of the paper are as follows: 1. Optical disk systems which are currently available are costly and are accompanied by excessive software and hardware capabilities which might be beyond the needs of a straightforward document storage and retrieval application. A tailor-made system to suit a specific application might be the route to follow, provided read-only and multiple-access operations are required and the optical system has a definite overall performance advantage over microform. 2. In general, the document handling times of both the scanners and the printers of optical systems present a constraint on their continued rapid operation. 3. For general applications it might be advisable to wait for at least a year or two, by which time erasable disk media should be available and some degree of disk standardization will have evolved. Costs, however, could still be a factor at that time. 4. The office-supplies industry is not expecting optical systems to have an appreciable effect on the 'paperless office' before 1990. 5. Image manipulation systems currently available are too generalized, slow, and require excessive computer storage. Their range of performance is somewhat limited. Should such a system be required, it would be best to develop application-specific software taking advantage of the computer configuration.
APA, Harvard, Vancouver, ISO, and other styles
42

Diepenbroek, Michael, Dieter Fütterer, Hannes Grobe, Heinz Miller, Manfred Reinke, and Rainer Sieger. "PANGAEA information system for glaciological data management." Annals of Glaciology 27 (1998): 655–60. http://dx.doi.org/10.3189/1998aog27-1-655-660.

Full text
Abstract:
Specific parameters determined from continental ice sheet or glacier cores can be used to reconstruct former climate. To use this scientific resource effectively, an information system is needed which guarantees consistent long-term data storage and provides easy access. Such a system, designed to archive any data of paleoclimatic relevance together with the related metadata, raw data and evaluated paleoclimatic data, is presented. It is based on a relational database and provides standardized import and export routines, easy access with uniform retrieval functions, and tools for visualizing the data. The network is designed as a client-server system, providing access through the Internet either with proprietary client software offering high functionality, or with read-only access to published data via the World Wide Web (www.pangaea.de).
APA, Harvard, Vancouver, ISO, and other styles
43

Sheng, Xiaohai, Aidong Peng, Hongbing Fu, Jiannian Yao, Yuanyuan Liu, and Yaobing Wang. "Reversible fluorescence modulation based on photochromic diarylethene and fluorescent coumarin." Journal of Materials Research 22, no. 6 (June 2007): 1558–63. http://dx.doi.org/10.1557/jmr.2007.0199.

Full text
Abstract:
A fluorescence switch by the photoisomerization of a photochromic compound in CH3CN and in a polymer film using a bistable photochromic (1,2-bis(2-methylbenzo[b]thiophen-3-yl) hexafluorocyclopentene) (BTF6) and a fluorescent 3-(2-benzothiazolyl)-7-(diethylamino) coumarin (coumarin6) was demonstrated. Because only the closed form of BTF6 serves as a fluorescence quencher of coumarin6, and the read (406 nm), write (254 nm), and erase (>500 nm) wavelengths are well-separated, a reversible modulation of the fluorescence of coumarin6 with high contrast and high sensitivity is expected to be realized. This system may represent an alternative to the fluorescence switches that are based on covalent systems in the potentially long-term optical data or image storage schemes utilizing luminescence intensity readout.
APA, Harvard, Vancouver, ISO, and other styles
44

N. Nasyrov, Iskandar, Ildar I. Nasyrov, Rustam I. Nasyrov, and Bulat A. Khairullin. "Data Mining for Information Storage Reliability Assessment by Relative Values." International Journal of Engineering & Technology 7, no. 4.7 (September 27, 2018): 204. http://dx.doi.org/10.14419/ijet.v7i4.7.20545.

Full text
Abstract:
The problem of data ambiguity in heterogeneous sets of equipment reliability indicators is considered. In practice, manufacturers do not always fill the SMART parameters unambiguously with the corresponding values across their different hard disk drive models. In addition, some parameters are sometimes empty, while others have only zero values. The scientific task of the research is to define a set of parameters that allows a comparative assessment of the reliability of each individual storage device, of any model from any manufacturer, for its timely replacement. The following conditions were used to select parameters suitable for evaluation by their relative values: 1) the parameter values for normally operating drives should always be greater or lower than for failed ones; 2) the parameter values should change monotonically across the series: normally working, withdrawn prematurely, failed; 3) the first two conditions must be fulfilled both in general and in particular, for example, for the drives of each brand separately. Separate averaging of the values was performed for normally operating, prematurely decommissioned and failed storage media, and the maximum of these three values was taken as 100%. The relative distribution of values for each parameter was studied. As a result of this study of relative values, five parameters were selected as suitable for evaluating the reliability of data storage devices (5 – "Reallocated sectors count", 7 – "Seek error rate", 184 – "End-to-end error", 196 – "Reallocation event count", 197 – "Current pending sector count"), plus another four that require more careful analysis (1 – "Raw read error rate", 10 – "Spin-up retry counts", 187 – "Reported uncorrectable errors", 198 – "Uncorrectable sector counts"), and one (194 – "Hard disk assembly temperature") for prospective use in solid-state drives.
APA, Harvard, Vancouver, ISO, and other styles
45

PARKIN, STUART. "MAGNETIC RACE-TRACK — A NOVEL STORAGE CLASS SPINTRONIC MEMORY." International Journal of Modern Physics B 22, no. 01n02 (January 20, 2008): 117–18. http://dx.doi.org/10.1142/s0217979208046190.

Full text
Abstract:
A proposal for a novel storage-class memory is described in which magnetic domains are used to store information in a "magnetic race-track" [1]. The magnetic race-track shift-register storage memory promises a solid-state memory with storage capacities and cost rivaling those of magnetic disk drives but with much improved performance and reliability. The magnetic race-track comprises tall columns of magnetic material arranged perpendicularly to the surface of a silicon wafer. The domains are moved up and down the race-track by nanosecond-long current pulses using the phenomenon of spin momentum transfer. The domain walls in the magnetic race-track are read using magnetic tunnel junction magnetoresistive sensing devices arranged in the silicon substrate. Recent progress in developing magnetic tunnel junction devices with giant tunneling magnetoresistance exceeding 350% at room temperature will be mentioned [2]. Experiments exploring the current-induced motion and depinning of domain walls in magnetic nano-wires with artificial pinning sites will be discussed. The domain wall structure, whether vortex or transverse, and the magnitude of the pinning potential are shown to have surprisingly little effect on the current-driven dynamics of the domain wall motion [3]. By contrast, the motion of DWs under nanosecond-long current pulses is surprisingly sensitive to their length [4]. In particular, we find that the probability of dislodging a DW, confined to a pinning site in a permalloy nanowire, oscillates with the length of the current pulse, with a period of just a few nanoseconds. Using an analytical model and micromagnetic simulations we show that this behaviour is connected to a current-induced oscillatory motion of the DW. The period is determined by the DW mass and the curvature of the confining potential. When the current is turned off during phases of the DW motion when the DW has enough momentum, there is a boomerang effect that can drive the DW out of the confining potential in the opposite direction to the flow of spin angular momentum. Note from Publisher: This article contains the abstract only.
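The pulse-length dependence mentioned in the abstract follows from treating the pinned DW as a harmonic oscillator; a standard sketch (not the paper's own derivation) gives the oscillation period from the DW effective mass $m$ and the curvature $k$ of the pinning potential:

```latex
% A DW of effective mass m in a parabolic pinning potential
% V(x) = \tfrac{1}{2} k x^2 oscillates at
\omega = \sqrt{\frac{k}{m}}, \qquad
T = \frac{2\pi}{\omega} = 2\pi \sqrt{\frac{m}{k}},
% so the depinning probability under current pulses varies with the
% pulse length on this few-nanosecond period.
```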
APA, Harvard, Vancouver, ISO, and other styles
46

Formato, Valerio. "A plugin-based approach to data analysis for the AMS experiment on the ISS." EPJ Web of Conferences 214 (2019): 05038. http://dx.doi.org/10.1051/epjconf/201921405038.

Full text
Abstract:
In many HEP experiments a typical data-analysis workflow requires each user to read the experiment data in order to extract meaningful information and produce the plots relevant to the analysis at hand. Multiple users accessing the same data results in redundant reads of the data itself, which could be factorized out, effectively improving the CPU efficiency of the analysis jobs and relieving stress on the storage infrastructure. To address this issue we present a modular and lightweight solution in which the users' code is embedded in different "analysis plugins" which are then collected and loaded at runtime for execution, so that the data is read only once and shared between all the plugins. This solution was developed for one of the data-analysis groups within the AMS collaboration but is easily extendable to all kinds of analyses and workloads that need I/O access to AMS data or custom data formats, and can even be adapted with little effort to another HEP experiment's data. This framework could then be embedded into an "analysis train"; we discuss a possible implementation and different ways to optimise CPU efficiency and execution time.
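The read-once/shared-loop pattern the abstract describes can be sketched as follows; the class and method names here (`AnalysisPlugin`, `process`, `finalize`, `run_train`) are illustrative stand-ins, not the AMS framework's actual API:

```python
# Sketch of the "read once, share between plugins" event loop.

class AnalysisPlugin:
    """Base class: each user analysis overrides process(event)."""
    def process(self, event):
        raise NotImplementedError
    def finalize(self):
        """Return the plugin's result after the single pass over the data."""
        return None

class MeanEnergy(AnalysisPlugin):
    """Example user analysis: mean energy over all events."""
    def __init__(self):
        self.total, self.n = 0.0, 0
    def process(self, event):
        self.total += event["energy"]
        self.n += 1
    def finalize(self):
        return self.total / self.n if self.n else 0.0

class HitCounter(AnalysisPlugin):
    """Example user analysis: total hit count."""
    def __init__(self):
        self.hits = 0
    def process(self, event):
        self.hits += event["nhits"]
    def finalize(self):
        return self.hits

def run_train(events, plugins):
    """Single pass over the (expensive) event I/O; every plugin sees each event."""
    for event in events:
        for p in plugins:
            p.process(event)
    return [p.finalize() for p in plugins]
```

In the real framework the plugins would be discovered and loaded at runtime (e.g. as shared libraries) rather than hard-coded; the point of the sketch is that the event loop, and hence the I/O, runs only once regardless of how many analyses are registered.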
APA, Harvard, Vancouver, ISO, and other styles
47

Zou, Jiu Peng, Yu Qiang Dai, Xue Wu Liu, Li Ming Zhang, and Feng Xia Liu. "The Programming Algorithm Based on Embedded System for the Output Conversion of the Humidity & Temperature Sensor SHTxx." Advanced Engineering Forum 6-7 (September 2012): 294–98. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.294.

Full text
Abstract:
To keep the output-value conversion of the digital humidity and temperature sensor SHTxx from consuming a large amount of storage and operation time, a group of new conversion polynomials and a corresponding programming algorithm were derived and tested. The polynomials are exactly equivalent to the conversion formulas provided by the manufacturer, but contain only binary fixed-point integers, fractional parts, and powers of two (2^N). By using fixed-point calculations and shift operations instead of floating-point calculations, program code size is reduced by about 60 percent and computing speed is nearly 4 times faster than with the original algorithm. Furthermore, a fast, unified CRC algorithm for the sensor's read-out data is proposed. The new programming algorithm simplifies the output conversion and thus paves the way for low-end embedded applications of the SHTxx.
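A minimal sketch of the shift-based idea, assuming the float conversion formulas from the Sensirion SHT1x datasheet (T = −40.1 + 0.01·SO_T at 5 V, 14-bit; RH = −2.0468 + 0.0367·SO_RH − 1.5955·10⁻⁶·SO_RH² at 12-bit); the paper's exact polynomials and scale factors are not reproduced here:

```python
# Integer-only SHT1x-style output conversion using shifts instead of floats.
# The constants approximate the datasheet coefficients scaled by 100
# (results in hundredths of a unit) and by a power-of-two denominator.

def temp_centi(so_t):
    """Raw 14-bit temperature -> hundredths of a degree C (exact, no floats)."""
    # 0.01 * SO_T expressed in centi-degrees is just SO_T itself,
    # so only the offset -40.10 C (= -4010 centi-degrees) remains.
    return so_t - 4010

def rh_centi(so_rh):
    """Raw 12-bit humidity -> hundredths of %RH via shift approximations."""
    linear = (so_rh * 3758) >> 10       # 3758 / 2**10 ~= 3.67   (100 * 0.0367)
    quad = (so_rh * so_rh * 167) >> 20  # 167  / 2**20 ~= 1.5955e-4
    return -205 + linear - quad         # -205 ~= 100 * (-2.0468)
```

On a small microcontroller the same code would use fixed-width integer types; the intermediate `so_rh * so_rh * 167` needs an unsigned 32-bit accumulator for a full-scale 12-bit reading.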
APA, Harvard, Vancouver, ISO, and other styles
48

ZOU, JING, HAO CHEN, QIYANG ZHANG, YANKANG, and DAN XIA. "FAST CONE-BEAM CT IMAGE RECONSTRUCTION BASED ON BPF ALGORITHM: APPLICATION TO ORTHO-CT." International Journal of Computational Methods 11, no. 04 (August 2014): 1350067. http://dx.doi.org/10.1142/s0219876213500679.

Full text
Abstract:
A multi-GPU-accelerated BPF algorithm is developed to improve computational efficiency. Three major acceleration techniques are introduced: (1) the reconstructed volume is divided vertically into subsets, which reduces the computational cost of the boundary term between parallel chords; (2) a transposition method is used to avoid inefficient global-memory access caused by differing chord selections; (3) an optimized memory-allocation scheme is adopted. Experimental data are used to evaluate image quality and reconstruction time. It takes only 4.118 s to reconstruct a 512 × 512 × 512 volume image from 360 projections of 512 × 512 on dual NVIDIA Tesla C2070 cards. Including the time spent on data reading, transfer, and storage, the complete reconstruction finishes in less than 9 s. In particular, BPF-based ROI reconstruction for cone-beam Ortho-CT shows promising application prospects.
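Technique (1) amounts to partitioning the volume's axial slices into contiguous slabs, one per device; a generic sketch of such a partition (an illustration, not the authors' CUDA code):

```python
def vertical_subsets(nz, n_devices):
    """Split nz axial slices into contiguous slabs, one per GPU.

    Returns half-open (z_start, z_stop) ranges; any remainder slices go
    to the first slabs, so slab sizes differ by at most one.
    """
    base, extra = divmod(nz, n_devices)
    bounds, z = [], 0
    for d in range(n_devices):
        size = base + (1 if d < extra else 0)
        bounds.append((z, z + size))
        z += size
    return bounds
```

With nz = 512 and two GPUs this yields two 256-slice slabs, each reconstructed independently on its own device.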
APA, Harvard, Vancouver, ISO, and other styles
49

Jenkins, Ron. "Profile Data Acquisition for the JCPDS?ICDD Database." Australian Journal of Physics 41, no. 2 (1988): 145. http://dx.doi.org/10.1071/ph880145.

Full text
Abstract:
The principal advantage offered by a fully digitised diffraction pattern is the retention of all features of the experimental pattern, including the line width and shape, the form and distribution of the background, etc. A file containing this type of reference data would allow the future use of data-processing techniques yet to be developed, such as peak location, background subtraction and α2 stripping. The availability of digitised reference patterns would also allow the use of pattern-recognition techniques for qualitative phase analysis, as well as offering interesting possibilities for quantitative work. Until recently most commercially available automated powder diffractometers were limited to 10-20 Mbytes of disc storage, and since a single fully digitised pattern requires about 10 kbytes, the provision of a file of thousands of digitised single-phase reference patterns has not been possible. The recent advent of compact disc read-only memory (CD-ROM) systems providing in excess of 500 Mbytes now offers a low-cost data storage capability. Plans are now in place for a new version of the Powder Diffraction File consisting of fully digitised patterns. Because of the need to maintain the database for years to come, it is most important that the stored data be as accurate and complete as possible.
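The storage figures quoted above imply the jump in capacity directly; a quick back-of-envelope check using the abstract's 10 kbytes per pattern:

```python
# Patterns that fit on a 20 MB local disc vs. a 500 MB CD-ROM,
# at roughly 10 kB per fully digitised pattern (figures from the abstract).
PATTERN_KB = 10
patterns_on_hdd = (20 * 1024) // PATTERN_KB     # high end of the 10-20 MB discs
patterns_on_cdrom = (500 * 1024) // PATTERN_KB  # 500 MB CD-ROM
# The CD-ROM holds tens of thousands of patterns; a local disc only ~2000.
```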
APA, Harvard, Vancouver, ISO, and other styles
50

Chourasiya, Sanjay K., Anil S. Baghel, Arpit Verma, and Saket Kale. "A cross sectional study to assess the functioning of cold chain in a tribal district of central India." International Journal Of Community Medicine And Public Health 5, no. 11 (October 25, 2018): 4826. http://dx.doi.org/10.18203/2394-6040.ijcmph20184578.

Full text
Abstract:
Background: Immunization is one of the best efforts that India is currently putting forward to fight various vaccine-preventable diseases. Cold chain maintenance is a perennial issue and a pre-requisite for the correct delivery of immunization services. Methods: A cross-sectional study was conducted among 18 cold chain points (CCPs) of Jhabua district using standard Government of India (GOI) structured questionnaires. Results: Of the 18 cold chain points, only 5.55% had a dry room for the storage of needles, syringes and other clerical material. A separate voltage stabilizer was attached to each deep freezer and ILR at only 22% of the health centers. Only 55.55% of CCPs had a waste disposal pit constructed as per guidelines. 94.45% of cold chain handlers (CCHs) knew the definition of the cold chain and the correct temperature range at which vaccines are to be stored, whereas only 33.33% of CCHs knew about the Shake test. 72.23% of CCHs knew how to read the vaccine vial monitor (VVM) and its stages correctly. Knowledge of the CCHs regarding the open vial policy was poor, with only 33.33% knowing its details exactly. Conclusions: The quality of the immunization programme can be increased by proper maintenance of the cold chain and management of vaccine logistics at every designated cold chain point. There is a need to improve the knowledge of CCHs regarding cold chain maintenance and handling practices.
APA, Harvard, Vancouver, ISO, and other styles
