Journal articles on the topic "Coded data storage"

To see the other types of publications on this topic, follow the link: Coded data storage.

Consult the top 50 journal articles for your research on the topic "Coded data storage".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract online, where these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Oggier, Frédérique, and Anwitaman Datta. "On Grid Quorums for Erasure Coded Data." Entropy 23, no. 2 (January 30, 2021): 177. http://dx.doi.org/10.3390/e23020177.

Abstract:
We consider the problem of designing grid quorum systems for maximum distance separable (MDS) erasure code based distributed storage systems. Quorums are used as a mechanism to maintain consistency in replication based storage systems, for which grid quorums have been shown to produce optimal load characteristics. This motivates the study of grid quorums in the context of erasure code based distributed storage systems. We show how grid quorums can be built for erasure coded data, investigate the load characteristics of these quorum systems, and demonstrate how sequential consistency is achieved even in the presence of storage node failures.
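
For orientation, here is a minimal sketch of the classic grid quorum construction for replicated data that this line of work generalizes: n = r × c nodes are arranged in a grid, and a quorum is one full row plus one full column, so any two quorums intersect. The paper's erasure-coded construction refines this idea; the code below is an illustration, not the paper's scheme.

```python
# Sketch of a classic grid quorum system for replicated storage:
# nodes 0..r*c-1 arranged in an r x c grid; a quorum is one full row
# plus one full column.  Any two quorums intersect, since the row of
# one quorum meets the column of the other, which is what consistency
# protocols rely on.
def grid_quorum(r: int, c: int, row: int, col: int) -> set[int]:
    full_row = {row * c + j for j in range(c)}
    full_col = {i * c + col for i in range(r)}
    return full_row | full_col

q1 = grid_quorum(4, 4, row=0, col=1)
q2 = grid_quorum(4, 4, row=3, col=2)
assert q1 & q2  # non-empty intersection is guaranteed by construction
print(sorted(q1 & q2))  # -> [2, 13]
```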
2

Ojima, Masahiro, Atsushi Saito, Toshimitsu Kaku, Masaru Ito, Yoshito Tsunoda, Shinji Takayama, and Yutaka Sugita. "Compact magnetooptical disk for coded data storage." Applied Optics 25, no. 4 (February 15, 1986): 483. http://dx.doi.org/10.1364/ao.25.000483.

3

Ditlbacher, H., J. R. Krenn, B. Lamprecht, A. Leitner, and F. R. Aussenegg. "Spectrally coded optical data storage by metal nanoparticles." Optics Letters 25, no. 8 (April 15, 2000): 563. http://dx.doi.org/10.1364/ol.25.000563.

4

Ditlbacher, Harald, Joachim Rudolf Krenn, Bernhard Lamprecht, Alfred Leitner, and Franz Rembert Aussenegg. "Metal Nanoparticles for Spectrally Coded Optical Data Storage." Optics and Photonics News 11, no. 12 (December 1, 2000): 43. http://dx.doi.org/10.1364/opn.11.12.000043.

5

Ojima, M., Y. Tsunoda, T. Maeda, T. Kaku, A. Saito, S. Takayama, and Y. Sugita. "Compact Magneto-Optical Disk for Coded Data Storage." IEEE Translation Journal on Magnetics in Japan 1, no. 6 (September 1985): 698–99. http://dx.doi.org/10.1109/tjmj.1985.4548917.

6

Liu, Chengjian, Qiang Wang, Xiaowen Chu, Yiu-Wing Leung, and Hai Liu. "ESetStore: An Erasure-Coded Storage System With Fast Data Recovery." IEEE Transactions on Parallel and Distributed Systems 31, no. 9 (September 1, 2020): 2001–16. http://dx.doi.org/10.1109/tpds.2020.2983411.

7

Huang, Jianzhong, Panping Zhou, Xiao Qin, Yanqun Wang, and Changsheng Xie. "Optimizing Erasure-Coded Data Archival for Replica-Based Storage Clusters." Computer Journal 62, no. 2 (August 3, 2018): 247–62. http://dx.doi.org/10.1093/comjnl/bxy079.

8

Xiang, Yu, Tian Lan, Vaneet Aggarwal, and Yih-Farn R. Chen. "Joint Latency and Cost Optimization for Erasure-Coded Data Center Storage." IEEE/ACM Transactions on Networking 24, no. 4 (August 2016): 2443–57. http://dx.doi.org/10.1109/tnet.2015.2466453.

9

Tajeddine, Razane, Oliver W. Gnilke, and Salim El Rouayheb. "Private Information Retrieval From MDS Coded Data in Distributed Storage Systems." IEEE Transactions on Information Theory 64, no. 11 (November 2018): 7081–93. http://dx.doi.org/10.1109/tit.2018.2815607.

10

Xu, Liangliang, Min Lyu, Zhipeng Li, Yongkun Li, and Yinlong Xu. "Deterministic Data Distribution for Efficient Recovery in Erasure-Coded Storage Systems." IEEE Transactions on Parallel and Distributed Systems 31, no. 10 (October 1, 2020): 2248–62. http://dx.doi.org/10.1109/tpds.2020.2987837.

11

Song, Haiyang, Jianan Li, Dakui Lin, Hongjie Liu, Yongkun Lin, Jianying Hao, Kun Wang, Xiao Lin, and Xiaodi Tan. "Reducing the Crosstalk in Collinear Holographic Data Storage Systems Based on Random Position Orthogonal Phase-Coding Reference." Photonics 10, no. 10 (October 16, 2023): 1160. http://dx.doi.org/10.3390/photonics10101160.

Abstract:
Previous studies have shown that orthogonal phase-coding multiplexing performs well with low crosstalk in conventional off-axis systems. However, noticeable crosstalk occurs when applying the orthogonal phase-coding multiplexing to collinear holographic data storage systems. This paper demonstrates the crosstalk generation mechanism, features, and elimination methods. The crosstalk is caused by an inconsistency in the intensity reconstruction from the orthogonal phase-coded reference wave. The intensity fluctuation range was approximately 40%. Moreover, the more concentrated the distribution of pixels with the same phase key, the more pronounced the crosstalk. We propose an effective random orthogonal phase-coding reference wave method to reduce the crosstalk. The orthogonal phase-coded reference wave is randomly distributed over the entire reference wave. These disordered orthogonal phase-coded reference waves achieve consistent reconstruction intensities exhibiting the desired low-crosstalk storage effect. The average correlation coefficient between pages decreased by 73%, and the similarity decreased by 85%. This orthogonal phase-coding multiplexing method can be applied to encrypted holographic data storage. The low-crosstalk nature of this technique will make the encryption system more secure.
12

Gaeta, Rossano, and Marco Grangetto. "Malicious Node Identification in Coded Distributed Storage Systems under Pollution Attacks." ACM Transactions on Modeling and Performance Evaluation of Computing Systems 6, no. 3 (September 30, 2021): 1–27. http://dx.doi.org/10.1145/3491062.

Abstract:
In coding-based distributed storage systems (DSSs), a set of storage nodes (SNs) hold coded fragments of a data unit that collectively allow one to recover the original information. It is well known that data modification (a.k.a. pollution attack) is the Achilles’ heel of such coding systems; indeed, intentional modification of a single coded fragment has the potential to prevent the reconstruction of the original information because of error propagation induced by the decoding algorithm. The challenge we take in this work is to devise an algorithm to identify polluted coded fragments within the set encoding a data unit and to characterize its performance. To this end, we provide the following contributions: (i) We devise MIND (Malicious node IdeNtification in DSS), an algorithm that is general with respect to the encoding mechanism chosen for the DSS, it is able to cope with a heterogeneous allocation of coded fragments to SNs, and it is effective in successfully identifying polluted coded fragments in a low-redundancy scenario; (ii) We formally prove both MIND termination and correctness; (iii) We derive an accurate analytical characterization of MIND performance (hit probability and complexity); (iv) We develop a C++ prototype that implements MIND to validate the performance predictions of the analytical model. Finally, to show applicability of our work, we define performance and robustness metrics for an allocation of coded fragments to SNs and we apply the results of the analytical characterization of MIND performance to select coded fragments allocations yielding robustness to collusion as well as the highest probability to identify actual attackers.
13

Oggier, Frédérique, and Anwitaman Datta. "On repairing erasure coded data in an active-passive mixed storage network." International Journal of Information and Coding Theory 3, no. 1 (2015): 58. http://dx.doi.org/10.1504/ijicot.2015.068697.

14

Bao, Han, Yijie Wang, and Fangliang Xu. "Reducing network cost of data repair in erasure-coded cross-datacenter storage." Future Generation Computer Systems 102 (January 2020): 494–506. http://dx.doi.org/10.1016/j.future.2019.08.027.

15

Xiao, Yifei, and Shijie Zhou. "Health Data Availability Protection: Delta-XOR-Relay Data Update in Erasure-Coded Cloud Storage Systems." Computer Modeling in Engineering & Sciences 135, no. 1 (2023): 169–85. http://dx.doi.org/10.32604/cmes.2022.021795.

16

Krishnan, Prasad, Lakshmi Natarajan, and V. Lalitha. "An Umbrella Converse for Data Exchange: Applied to Caching, Computing, and Shuffling." Entropy 23, no. 8 (July 30, 2021): 985. http://dx.doi.org/10.3390/e23080985.

Abstract:
The problem of data exchange between multiple nodes with storage and communication capabilities models several current multi-user communication problems like Coded Caching, Data Shuffling, Coded Computing, etc. The goal in such problems is to design communication schemes which accomplish the desired data exchange between the nodes with the optimal (minimum) amount of communication load. In this work, we present a converse to such a general data exchange problem. The expression of the converse depends only on the number of bits to be moved between different subsets of nodes, and does not assume anything further specific about the parameters in the problem. Specific problem formulations, such as those in Coded Caching, Coded Data Shuffling, and Coded Distributed Computing, can be seen as instances of this generic data exchange problem. Applying our generic converse, we can efficiently recover known important converses in these formulations. Further, for a generic coded caching problem with heterogeneous cache sizes at the clients with or without a central server, we obtain a new general converse, which subsumes some existing results. Finally we relate a “centralized” version of our bound to the known generalized independence number bound in index coding and discuss our bound’s tightness in this context.
17

Shao, Bilin, Dan Song, Genqing Bian, and Yu Zhao. "Rack Aware Data Placement for Network Consumption in Erasure-Coded Clustered Storage Systems." Information 9, no. 7 (June 21, 2018): 150. http://dx.doi.org/10.3390/info9070150.

18

Shen, Zhirong, Jiwu Shu, and Yingxun Fu. "Parity-Switched Data Placement: Optimizing Partial Stripe Writes in XOR-Coded Storage Systems." IEEE Transactions on Parallel and Distributed Systems 27, no. 11 (November 1, 2016): 3311–22. http://dx.doi.org/10.1109/tpds.2016.2525770.

19

Li, Xiaolu, Zuoru Yang, Jinhong Li, Runhui Li, Patrick P. C. Lee, Qun Huang, and Yuchong Hu. "Repair Pipelining for Erasure-coded Storage: Algorithms and Evaluation." ACM Transactions on Storage 17, no. 2 (May 28, 2021): 1–29. http://dx.doi.org/10.1145/3436890.

Abstract:
We propose repair pipelining, a technique that speeds up repair performance in general erasure-coded storage. By carefully scheduling the repair of failed data in small-size units across storage nodes in a pipelined manner, repair pipelining reduces the single-block repair time to approximately the same as the normal read time for a single block in homogeneous environments. We further design different extensions of repair pipelining algorithms for heterogeneous environments and multi-block repair operations. We implement a repair pipelining prototype, called ECPipe, and integrate it as a middleware system into two versions of the Hadoop Distributed File System (HDFS) (namely, HDFS-RAID and HDFS-3) as well as the Quantcast File System. Experiments on a local testbed and Amazon EC2 show that repair pipelining significantly improves the performance of degraded reads and full-node recovery over existing repair techniques.
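
A back-of-the-envelope timing model of why repairing in small units helps, assuming k helper nodes, a uniform bottleneck bandwidth, and a relay-style pipeline; the constants are illustrative, not taken from the paper.

```python
# Timing model for repairing one block of size B from k helpers.
# Conventional repair: the requestor downloads k whole blocks -> ~k*T,
# where T = B / bandwidth is the time to read one block.
# Repair pipelining: the block is split into s small slices that are
# combined hop-by-hop through the k helpers, so transfers overlap.
def conventional_repair_time(k: int, T: float) -> float:
    return k * T  # k serialized block downloads at the bottleneck link

def pipelined_repair_time(k: int, T: float, s: int) -> float:
    slice_time = T / s
    return (s + k - 1) * slice_time  # pipeline fill + drain

T, k = 1.0, 4
for s in (1, 8, 64):
    print(s, conventional_repair_time(k, T), pipelined_repair_time(k, T, s))
# As s grows, pipelined repair time approaches T: roughly the cost of
# one normal single-block read, matching the abstract's claim.
```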
20

Hamadani, Ambreen, Nazir A. Ganai, Shah F. Farooq, and Basharat A. Bhat. "Big data management: from hard drives to DNA drives." Indian Journal of Animal Sciences 90, no. 2 (March 6, 2020): 134–40. http://dx.doi.org/10.56093/ijans.v90i2.98761.

Abstract:
Information and Communication Technology is transforming all aspects of modern life, and in this digital era there is a tremendous increase in the amount of data generated every day. Current conventional storage devices are unable to keep pace with this rapidly growing data, so there is a need to look for alternative storage media. DNA, being exceptional at storing biological information, offers promising storage capacity. With its unique combination of dense storage and reliability, it may prove better than all conventional storage devices in the near future. The nucleotide bases are present in DNA in a particular sequence representing the coded information; they are the equivalent of binary letters (0 and 1). To store data in DNA, binary data is first converted to ternary or quaternary, which is then translated into a nucleotide code comprising the 4 nucleotide bases (A, C, G, T). A DNA strand is then synthesized as per the code developed. This may either be stored in pools or sequenced back. The nucleotide code is converted back into ternary and subsequently the binary code, which is read just like digital data. DNA drives may have a wide variety of applications in information storage and DNA steganography.
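
A minimal sketch of the quaternary variant of this pipeline, assuming a fixed 2-bits-per-base mapping; real DNA storage codecs add constraints such as homopolymer avoidance, GC balancing, addressing, and error correction on top of this idea.

```python
# Illustrative quaternary encoding: 2 bits per nucleotide base.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: bits for bits, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")         # 'hi' -> 16 bits -> 8 bases
assert decode(strand) == b"hi"
print(strand)                  # -> CGGACGGC
```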
21

Zhang, Xingjun, Ningjing Liang, Yunfei Liu, Changjiang Zhang, and Yang Li. "SA-RSR: a read-optimal data recovery strategy for XOR-coded distributed storage systems." Frontiers of Information Technology & Electronic Engineering 23, no. 6 (June 2022): 858–75. http://dx.doi.org/10.1631/fitee.2100242.

22

Shen, X. A., and R. Kachru. "Use of biphase-coded pulses for wideband data storage in time-domain optical memories." Applied Optics 32, no. 17 (June 10, 1993): 3149. http://dx.doi.org/10.1364/ao.32.003149.

23

Hu, Yupeng, Qian Li, Wei Xie, and Zhenyu Ye. "An Ant Colony Optimization Based Data Update Scheme for Distributed Erasure-Coded Storage Systems." IEEE Access 8 (2020): 118696–706. http://dx.doi.org/10.1109/access.2020.3004577.

24

Zhang, Shu Zhen, and Hai Long Song. "A Secret Sharing Algorithm Based on Regenerating Codes." Applied Mechanics and Materials 397-400 (September 2013): 2031–36. http://dx.doi.org/10.4028/www.scientific.net/amm.397-400.2031.

Abstract:
Regenerating codes, a special kind of MDS erasure code, were first used to solve the fault-tolerance problem in distributed storage systems. This paper constructs a new kind of secret sharing algorithm based on regenerating codes. The main process is that the original secret data is first striped and coded with an MDS erasure coding algorithm, and the vector components are then periodically distributed to the secret sharers in a certain order. The secret data can be rebuilt with the decoding algorithm of the regenerating codes if there are enough shares of the secret. Theoretical analysis shows that the algorithm is a safe threshold scheme. Because the operations are mainly linear over a small finite field, the computational cost is low and the scheme is easy to realize.
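
For a concrete feel of an MDS-code-based threshold scheme of this kind, here is a minimal sketch of Shamir's (k, n) secret sharing, which is a Reed-Solomon (hence MDS) code in disguise; the paper's regenerating-code construction adds efficient share repair on top of this idea.

```python
import random

P = 2**61 - 1  # a Mersenne prime; shares and secret live in GF(P)

def make_shares(secret: int, k: int, n: int):
    # Random degree-(k-1) polynomial with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 over GF(P).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```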
25

Su, Yu, Shu Hong Wen, and Jian Ping Chai. "Embedded System Based Television Data Collection and Return Technology." Applied Mechanics and Materials 48-49 (February 2011): 496–501. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.496.

Abstract:
Television data collection and return technologies are key technologies in television secure broadcasting systems, TV video content surveillance, TV program copyright protection, and client advertisement broadcasting. In China, the dominant methods of TV video content surveillance are manual tape recording and whole-program automatic return. The manual method costs too much, while whole-program return needs a lot of network bandwidth and storage space. This paper proposes a new method of television data collection and return: video fields are extracted from the continuous video and coded at a frequency of about one field per second, in other words, one field from every fifty fields of the original video in the PAL TV system. The extracted frames can be coded by any means, for example JPEG2000 or the intra coding modes of H.264 or MPEG2. The TV programs whose content and topic change most frequently are news and advertisements, which may change topic every five to ten seconds, so the extracted sequences keep the same topic and content and carry enough information relative to the original video for content surveillance. The data quantity of an extracted sequence is about 3 percent of the original video program, which saves a large amount of network bandwidth and storage space. A hardware implementation based on an embedded system is proposed: the TV Field Extractor, which cyclically extracts images from the target TV program, compresses them with a high-performance algorithm, and stores the resulting sequences of still images on the hard disk or transmits them to the monitoring center via the network. This method evidently reduces device cost, network bandwidth, and storage space, and can be widely adopted in TV program content surveillance and TV secure broadcasting systems.
26

Pandit, Anubhav. "The Identical Data in Cloud Storage with ADJDUP Technique." December 2020 2, no. 4 (January 6, 2021): 214–18. http://dx.doi.org/10.36548/jucct.2020.4.004.

Abstract:
Data deduplication is necessary for corporations to minimize the hidden charges linked with backing up their data on a public cloud platform. Inefficient data storage is wasteful on its own, and the problem grows in the public cloud, where scattered storage structures create multiple clones of a single account's data for collation or other purposes. Deduplication helps shrink costs by stretching the benefit of a given volume of storage. Unfortunately, deduplication raises several security concerns, so additional encryption is needed to protect the data. This paper presents a system for dynamic information locking and encoding with convergent encryption: the data is encoded first, and the ciphertext is then encoded once more. Chunk-level deduplication is used to reduce disk usage; identical segments always encrypt to the same ciphertext, yet the key cannot be derived from the encrypted chunk data by an attacker, and the content is also guarded from the cloud server. The focus of this paper is reducing disk storage while providing protection for online cloud deduplication.
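
A minimal sketch of the convergent-encryption idea the abstract describes: the key is derived from the content itself, so identical chunks encrypt identically and can be deduplicated on ciphertext. It assumes the third-party `cryptography` package and is illustrative, not a hardened design.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(chunk: bytes) -> tuple[bytes, bytes]:
    # Convergent key = hash of the plaintext chunk, so the same chunk
    # always yields the same key and the same ciphertext (dedup-friendly).
    key = hashlib.sha256(chunk).digest()
    # A fixed nonce is tolerable here only because each key encrypts
    # exactly one message: the chunk it was derived from.
    ciphertext = AESGCM(key).encrypt(b"\x00" * 12, chunk, None)
    return key, ciphertext

def dedup_store(store: dict, chunk: bytes) -> bytes:
    key, ct = convergent_encrypt(chunk)
    fp = hashlib.sha256(ct).digest()   # fingerprint of the ciphertext
    store.setdefault(fp, ct)           # identical chunks are stored once
    return fp

store = {}
dedup_store(store, b"same data")
dedup_store(store, b"same data")
assert len(store) == 1  # duplicate detected on ciphertext, not plaintext
```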
27

Kim, Yongok, Gyuyeol Kong, and Sooyong Choi. "Error Correcting Capable 2/4 Modulation Code Using Trellis Coded Modulation in Holographic Data Storage." Japanese Journal of Applied Physics 51, no. 8S2 (August 1, 2012): 08JD08. http://dx.doi.org/10.7567/jjap.51.08jd08.

28

Kim, Yongok, Gyuyeol Kong, and Sooyong Choi. "Error Correcting Capable 2/4 Modulation Code Using Trellis Coded Modulation in Holographic Data Storage." Japanese Journal of Applied Physics 51 (August 20, 2012): 08JD08. http://dx.doi.org/10.1143/jjap.51.08jd08.

29

Pei, Xiaoqiang, Yijie Wang, Xingkong Ma, and Fangliang Xu. "Efficient in-place update with grouped and pipelined data transmission in erasure-coded storage systems." Future Generation Computer Systems 69 (April 2017): 24–40. http://dx.doi.org/10.1016/j.future.2016.10.016.

30

Chen, Junqi, Yong Wang, Miao Ye, Qinghao Zhang, and Wenlong Ke. "A Load-Aware Multistripe Concurrent Update Scheme in Erasure-Coded Storage System." Wireless Communications and Mobile Computing 2022 (May 19, 2022): 1–15. http://dx.doi.org/10.1155/2022/5392474.

Abstract:
Erasure coding has been widely deployed in today's data centers because it significantly reduces extra storage costs while providing high storage reliability. However, erasure coding introduces more network traffic and computational overhead into the data update process. How to improve efficiency and mitigate system imbalance during updates remains a challenging problem. Most existing update schemes for erasure codes focus only on the single-stripe update scenario and ignore the heterogeneity of node and network status, so they cannot adequately deal with the low update efficiency and load imbalance caused by multistripe concurrent updates. To solve this problem, this paper proposes a Load-Aware Multistripe concurrent Update (LAMU) scheme for erasure-coded storage systems. Notably, LAMU introduces the Software-Defined Network (SDN) mechanism to measure node loads and network status in real time. It selects non-duplicated nodes with better performance in terms of CPU utilization, remaining memory, and I/O load as the computing nodes for multiple update stripes. Then, a multi-attribute decision-making method is used to schedule the network traffic generated in the update process. This mechanism improves the transmission efficiency of update traffic and lets LAMU adapt to multistripe concurrent update scenarios in heterogeneous network environments. Finally, we designed a prototype system for multistripe concurrent updates. Extensive experimental results show that LAMU improves update efficiency and provides better system load-balancing performance.
31

Mahalingam, Hemalatha, Padmapriya Velupillai Meikandan, Karuppuswamy Thenmozhi, Kawthar Mostafa Moria, Chandrasekaran Lakshmi, Nithya Chidambaram, and Rengarajan Amirtharajan. "Neural Attractor-Based Adaptive Key Generator with DNA-Coded Security and Privacy Framework for Multimedia Data in Cloud Environments." Mathematics 11, no. 8 (April 7, 2023): 1769. http://dx.doi.org/10.3390/math11081769.

Abstract:
Cloud services offer doctors and data scientists access to medical data from multiple locations using different devices (laptops, desktops, tablets, smartphones, etc.). Therefore, cyber threats to medical data at rest, in transit, and in use by applications need to be pinpointed and prevented preemptively through a host of proven cryptographic solutions. The presented work integrates adaptive key generation, neural-based confusion, and non-XOR DNA diffusion, which together offer a more extensive and unique key, adaptive confusion, and an unpredictable diffusion algorithm. Only authenticated users can store the encrypted image in cloud storage. The proposed security framework uses logistic maps, tent maps, and an adaptive key generation module. The adaptive key is generated from every input plain image using a multilayer, nonlinear neural network. The Hopfield neural network (HNN) is a recurrent temporal network that updates its learning with every plain image. Amazon Web Services (AWS) Simple Storage Service (S3) is used to store the encrypted images. Using benchmark evaluation metrics, the image encryption is validated against brute-force and statistical attacks, and an encryption quality analysis is also made. It is thus shown that the proposed scheme is well suited to hosting secure images in cloud storage.
32

Khan, Imran Ullah, M. A. Ansari, S. Hasan Saeed, and Kakul Khan. "Evaluation and Analysis of Rate Control Methods for H.264/AVC and MPEG-4 Video Codec." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 2 (April 1, 2018): 1273. http://dx.doi.org/10.11591/ijece.v8i2.pp1273-1280.

Abstract:
<p class="Default">Audio, image and video signals produce a vast amount of data. The only solution of this problem is to compress data before storage and transmission. In general there is the three crucial terms as, Bit Rate Reduction, Fast Data Transfer and Reduction in Storage. Rate control is a vigorous factor in video coding. In video communications, rate control must ensure the coded bitstream can be transmitted effectively and make full use of the narrow bandwidth. There are various test models usually suggested by a standard during the development of video codes models in order to video coding which should be suffienciently be efficient based on H.264 at very low bit rate. These models are Test Model Number 5 (TMN5), Test Model Number 8 for H.263, and Verification Model 8 (VM8) for MPEG-4 and H.264 etc. In this work, Rate control analysis for H.264, MPEG-4 performed. For Rate control analysis test model verification model version 8.0 is adopted.</p>
33

Khan, Imran Ullah, M. A. Ansari, S. Hasan Saeed, and Kakul Khan. "Evaluation and Analysis of Rate Control Methods for H.264/AVC and MPEG-4 Video Codec." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (October 1, 2018): 2788. http://dx.doi.org/10.11591/ijece.v8i5.pp2788-2794.

Abstract:
<p class="Default"><span>Audio, image and video signals produce a vast amount of data. The only solution of this problem is to compress data before storage and transmission. In general there is the three crucial terms as, Bit Rate Reduction, Fast Data Transfer and Reduction in Storage. Rate control is a vigorous factor in video coding. In video communications, rate control must ensure the coded bitstream can be transmitted effectively and make full use of the narrow bandwidth. There are various test models usually suggested by a standard during the development of video codes models in order to video coding which should be suffienciently be efficient based on H.264 at very low bit rate. These models are Test Model Number 5 (TMN5), Test Model Number 8 for H.263, and Verification Model 8 (VM8) for MPEG-4 and H.264 etc. In this work, Rate control analysis for H.264, MPEG-4 performed. For Rate control analysis test model verification model version 8.0 is adopted.</span></p>
34

Shen, Zhirong, Patrick P. C. Lee, Jiwu Shu, and Wenzhong Guo. "Encoding-Aware Data Placement for Efficient Degraded Reads in XOR-Coded Storage Systems: Algorithms and Evaluation." IEEE Transactions on Parallel and Distributed Systems 29, no. 12 (December 1, 2018): 2757–70. http://dx.doi.org/10.1109/tpds.2018.2842210.

35

Yao, Jing, and Qi Liang Du. "A Simple Data Acquisition Software for Serial Devices Based on Excel VBA and its Application in Rotary Kilns." Advanced Materials Research 591-593 (November 2012): 1638–44. http://dx.doi.org/10.4028/www.scientific.net/amr.591-593.1638.

Abstract:
In order to collect field data from rotary kilns for detailed analysis before an automatic control system was designed and implemented, a simple data acquisition program for serial devices based on Excel VBA is introduced in this paper. The serial communication module was coded with the MSComm control. Data are presented both in tables and in curves using Excel's charting tools. A modified 53H algorithm is presented for online abnormal-value detection, and several data filtering functions are provided. Organized workbooks and worksheets are used as storage structures rather than a database system. Its application in a lithopone rotary kiln proved its effectiveness in data acquisition, storage, query, and preprocessing. This kind of simple data acquisition software has advantages in economy and simplicity, and is suitable for data acquisition applications with low sampling rates.
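
The 53H smoother referenced above is conventionally described as a running median of 5, then a running median of 3, then Hanning-window smoothing; here is a rough numpy sketch under that reading (the paper's specific modification is not detailed in the abstract).

```python
import numpy as np

def running_median(x, w):
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + w]) for i in range(len(x))])

def filter_53h(x):
    # Classic 53H smoother: median-of-5, median-of-3, then a Hanning
    # window with weights (1/4, 1/2, 1/4).
    y = running_median(np.asarray(x, dtype=float), 5)
    y = running_median(y, 3)
    yp = np.pad(y, 1, mode="edge")
    return 0.25 * yp[:-2] + 0.5 * yp[1:-1] + 0.25 * yp[2:]

def outliers(x, threshold):
    # Flag samples deviating from the smoothed curve by more than threshold.
    return np.abs(np.asarray(x, dtype=float) - filter_53h(x)) > threshold

x = np.sin(np.linspace(0, 6, 200)) * 100
x[50] += 400  # inject an abnormal value (spike)
print(np.where(outliers(x, threshold=50))[0])  # -> [50]
```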
36

Jagdish, Mukta, Amelec Viloria, Jesus Vargas, Omar Bonerge Pineda Lezama, and David Ovallos-Gazabon. "Modeling software architecture design on data storage security in cloud computing environments." Journal of Intelligent & Fuzzy Systems 39, no. 6 (December 4, 2020): 8557–64. http://dx.doi.org/10.3233/jifs-189172.

Abstract:
Cloud-based computation is seen as the core architecture of the upcoming generation of IT enterprise. In contrast to conventional solutions, where IT services are under proper logical, personnel, and physical control, it transfers application software and large databases to data centers, where the security and management of the data and services are not fully trustworthy. This process faces many challenges for society and organizations that have not been well understood. This research therefore focuses on secure cloud data storage, an important aspect of quality of service. To assure the correctness of user data in the cloud, a flexible and effective distributed technique with two salient features is examined, utilizing homomorphic tokens with erasure-coded data for distributed verification; the technique achieves both data error localization and storage correctness. It also identifies misbehaving servers and supports efficient, secure dynamic operations on data blocks, such as append, delete, and update. Security and performance analysis shows the proposed method is effective, resilient, and efficient against Byzantine failures, server colluding attacks, and malicious data modification attacks.
37

Zhou, Anan, Benshun Yi, and Laigan Luo. "Tree-structured data placement scheme with cluster-aided top-down transmission in erasure-coded distributed storage systems." Computer Networks 204 (February 2022): 108714. http://dx.doi.org/10.1016/j.comnet.2021.108714.

38

Gopinath, R., and B. G. Geetha. "An E-learning System Based on Secure Data Storage Services in Cloud Computing." International Journal of Information Technology and Web Engineering 8, no. 2 (April 2013): 1–17. http://dx.doi.org/10.4018/jitwe.2013040101.

Abstract:
Abundant affordable computers, web resources, and educational content are set to transform on-demand education in the field of cloud infrastructure. There is therefore a necessity to redesign the educational system to better meet these needs. The appearance of cloud-based services supports the creation of a new generation of e-learning systems that store multimedia data in the cloud; this draws attention from academia and research, since it can provide high-quality resources. Even though the merits of cloud services are attractive, the physical possession of users' data carries security risks with respect to data correctness. This poses many new security challenges which have not been well explored. This paper focuses mainly on distributed data storage security for e-learning systems, which has always been an important aspect of quality of service. To make sure of the correctness of users' data in the cloud, an adaptable and effective auditing mechanism addresses these challenges and distributes erasure-coded data for the e-learning web application. The extensive analysis shows that the auditing result achieves quick data error correction and localization of servers under malicious data modification attacks.
39

Albina, Mayniar. "Perancangan Aplikasi Kompresi File Teks Menggunakan Algoritma Context Tree Weighting." Journal of Informatics Management and Information Technology 2, no. 3 (July 31, 2022): 78–82. http://dx.doi.org/10.47065/jimat.v2i3.149.

Abstract:
Applications are increasingly used as sources of information, but a frequently encountered problem is the need for large storage space. Data compression is the process of converting a set of data into a coded form to save data storage. The Context Tree Weighting algorithm is a data compression algorithm. To assess the results of the compression process, the compression ratio and space saving metrics are used. According to the trials conducted, data that was originally large can be compressed very well by this implementation of text file compression.
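
The two metrics are simple to compute; here is a minimal sketch using zlib as a stand-in compressor (a real Context Tree Weighting coder is considerably more involved), with compression ratio taken as compressed size over original size, matching the space-saving definition.

```python
import zlib

def compression_metrics(original: bytes) -> tuple[float, float]:
    compressed = zlib.compress(original, level=9)  # stand-in for CTW
    ratio = len(compressed) / len(original)        # compressed fraction
    space_saving = 1.0 - ratio                     # fraction of space saved
    return ratio, space_saving

text = b"the quick brown fox jumps over the lazy dog " * 200
ratio, saving = compression_metrics(text)
print(f"ratio={ratio:.3f}, space saving={saving:.1%}")
```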
40

Ellis, Katy, Chris Brew, George Patargias, Tim Adye, Rob Appleyard, Alastair Dewhurst, and Ian Johnson. "XRootD and Object Store: A new paradigm." EPJ Web of Conferences 245 (2020): 04006. http://dx.doi.org/10.1051/epjconf/202024504006.

Abstract:
The XRootD software framework is essential for data access at WLCG sites. The WLCG community is exploring and expanding XRootD functionality. This presents a particular challenge at the RAL Tier-1 as the Echo storage service is a Ceph based Erasure Coded object store. External access to Echo uses gateway machines which run GridFTP and caching servers. Local jobs access Echo via caches on every worker node, but it is clear there are inefficiencies in the system. Remote jobs also access data via XRootD on Echo. For CMS jobs this is via the AAA service. ATLAS, who are consolidating their storage at fewer sites, are increasingly accessing job input data remotely. This paper describes the continuing work to optimise both local and remote data access by testing different caching methods.
41

Courtney, S. B., M. J. Tricard, and R. W. Hendricks. "PC-Based Management and Analysis of X-Ray Residual Stress Data." Advances in X-ray Analysis 36 (1992): 535–41. http://dx.doi.org/10.1154/s0376030800019169.

Abstract:
The authors have developed two independent software packages that store x-ray peak locations, integrated intensities, and full-width half-maximum intensity data as a function of diffractometer tilt and orientation angle; this information is used to compute residual stress tensor values. Each program retrieves the fitted x-ray peak locations from a dBASE-compatible data set that is independent of both x-ray diffractometer and acquisition software. Machine-specific routines have been coded to transfer peak data and general diffraction setup information from several different x-ray acquisition platforms into this common format. The two database management programs provide stand-alone storage, retrieval, analysis, and graphic output of data, and thus have become practical laboratory vehicles toward establishing a standard database format for storing x-ray strain measurements and the residual stress values calculated therefrom.
42

Wei, Boan, Jianqin Zhang, Chaonan Hu, and Zheng Wen. "A Clustering Visualization Method for Density Partitioning of Trajectory Big Data Based on Multi-Level Time Encoding." Applied Sciences 13, no. 19 (September 26, 2023): 10714. http://dx.doi.org/10.3390/app131910714.

Abstract:
The proliferation of the Internet and the widespread adoption of mobile devices have given rise to an immense volume of real-time trajectory big data. However, a single computer and conventional databases with limited scalability struggle to manage this data effectively. During the process of visual rendering, issues such as page stuttering and subpar visual outcomes often arise. This paper, founded on a distributed architecture, introduces a multi-level time encoding method using “minutes”, “hours”, and “days” as fundamental units, achieving a storage model for trajectory data at multi-scale time. Furthermore, building upon an improved DBSCAN clustering algorithm and integrating it with the K-means clustering algorithm, a novel density-based partitioning clustering algorithm has been introduced, which incorporates road coefficients to circumvent architectural obstacles, successfully resolving page stuttering issues and significantly enhancing the quality of visualization. The results indicate the following: (1) when data is extracted using the units of “minutes”, “hours”, and “days”, the retrieval efficiency of this model is 6.206 times, 12.475 times, and 18.634 times higher, respectively, compared to the retrieval efficiency of the original storage model. As the volume of retrieved data increases, the retrieval efficiency of the proposed storage model becomes increasingly superior to that of the original storage model. Under identical experimental conditions, this model’s retrieval efficiency also outperforms the space–time-coded storage model; (2) Under a consistent rendering level, the clustered trajectory data, when compared to the unclustered raw data, has shown a 40% improvement in the loading speed of generating heat maps. There is an absence of page stuttering. Furthermore, the heat kernel phenomenon in the heat map was also resolved while enhancing the visualization rendering speed.
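
One plausible reading of the multi-level time encoding, assuming each trajectory point is indexed under minute, hour, and day bucket keys so that queries at any of the three scales hit a precomputed bucket; the key formats and field names here are illustrative guesses, not the paper's schema.

```python
from datetime import datetime

def time_keys(ts: datetime) -> dict[str, str]:
    # Encode one trajectory timestamp at three granularities so that
    # day-, hour-, and minute-scale queries each hit their own bucket.
    return {
        "day":    ts.strftime("%Y%m%d"),
        "hour":   ts.strftime("%Y%m%d%H"),
        "minute": ts.strftime("%Y%m%d%H%M"),
    }

# Toy index: bucket key -> list of trajectory point ids.
index: dict[str, list[int]] = {}

def insert(point_id: int, ts: datetime):
    for key in time_keys(ts).values():
        index.setdefault(key, []).append(point_id)

insert(1, datetime(2023, 9, 26, 10, 15))
insert(2, datetime(2023, 9, 26, 11, 40))
print(index["20230926"])    # day-scale query   -> [1, 2]
print(index["2023092610"])  # hour-scale query  -> [1]
```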
43

Yang, Jian Xi, and Li Wen Zhang. "Optimization of Sensor Placement in SHM Based on the Dual Coded Genetic Algorithm." Applied Mechanics and Materials 151 (January 2012): 139–44. http://dx.doi.org/10.4028/www.scientific.net/amm.151.139.

Abstract:
This paper uses a dual-structure coded genetic algorithm to optimize sensor placement. The method, which combines an optimal preservation (elitist) strategy with adaptive crossover, overcomes the deficiencies of applying computers to lengthy large-scale structural data and storage space while ensuring the search for the optimal solution. Finally, analysis of a continuous rigid-frame bridge project proves that the method is superior to the effective independence method in search capability, computational efficiency, and reliability, though the speed of convergence still needs further improvement.
44

Vijayalakshmi, V., and K. Sharmila. "Secure Data Transactions based on Hash Coded Starvation Blockchain Security using Padded Ring Signature-ECC for Network of Things." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 1 (February 6, 2023): 53–61. http://dx.doi.org/10.17762/ijritcc.v11i1.5986.

Abstract:
Blockchain is a decentralized paradigm that works with parallel and distributed ledger technology, application processes, and service-oriented design. We propose secure data transactions based on hash-coded starvation blockchain security using a padded ring signature with ECC for the Network of Things. Initially, the crypto policy is authenticated based on the user-owner shared resource policy and access rights, creating a public blockchain environment with a P2P blockchain network. The owner encrypts the data using optimized ECC through Hash-Coded Starvation Blockchain Security (HCSBS), which forms the encrypted blocks' provable partition chain link (P2CL). The encrypted blocks are transmitted into the network of nodes monitored by the NoT. During data transmission, the Network of Things monitors the transaction flow to verify authenticity over the network of nodes. The monitored data is securely stored in transaction block storage with hash-indexed blocks under a chain ring policy (HICLP), creating controller node aggregation over the transaction environment to securely transfer the data to the peer end. The user gets the access key to decrypt the data under the aggregated shared resource policy. The proposed system achieves higher security compared to previous designs.
45

Balaji, K., and S. S. Manikandasaran. "Data Security and Deduplication Framework for Securing and Deduplicating Users’ Data in Public and Private Cloud Environment." Journal of Scientific Research 14, no. 1 (January 1, 2022): 153–65. http://dx.doi.org/10.3329/jsr.v14i1.54063.

Abstract:
Maintaining the security of data stored in the public or private cloud is a tedious task. The cloud is the only arrangement for storing enormous amounts of data, but there is a possibility of storing the same data more than once. A traditional security system generates different unreadable data for the same readable content of a file. It is therefore necessary to address both data security and duplication in cloud storage. This paper concentrates on developing a data security and deduplication framework with different security techniques and mechanisms to address these difficulties in the cloud. The proposed framework focuses on reducing security vulnerability as well as data duplication, and the paper describes its components. The main research contribution is an enhanced convergent encryption technique, key generation techniques, and a deduplication mechanism for maintaining a single copy of data in the cloud. The framework's efficiency is measured by developing a cloud-based application that implements all of its procedures and testing it in the cloud environment.
46

Kuang, Fengtian, Bo Mi, Yang Li, Yuan Weng, and Shijie Wu. "Multiparty Homomorphic Machine Learning with Data Security and Model Preservation." Mathematical Problems in Engineering 2021 (January 11, 2021): 1–11. http://dx.doi.org/10.1155/2021/6615839.

Abstract:
With the widespread application of machine learning (ML), data security has been a serious issue. To eliminate the conflict between data privacy and computability, homomorphism is extensively researched due to its capacity of performing operations over ciphertexts. Considering that the data provided by a single party are not always adequate to derive a competent model via machine learning, we proposed a privacy-preserving training method for the neural network over multiple data providers. Moreover, taking the trainer’s intellectual property into account, our scheme also achieved the goal of model parameter protection. Thanks to the hardness of the conjugate search problem (CSP) and discrete logarithm problem (DLP), the confidentiality of training data and system model can be reduced to well-studied security assumptions. In terms of efficiency, since all messages are coded as low-dimensional matrices, the expansion rates with regard to storage and computation overheads are linear compared to plaintext implementation without accuracy loss. In reality, our method can be transplanted to any machine learning system involving multiple parties due to its capacity of fully homomorphic computation.
47

Wang, Nan, Yuan Zhang, and Wei Ning. "Research on Compressive and Coding Algorithm of Bearing Vibration Signal in Wireless Transmission." Applied Mechanics and Materials 757 (April 2015): 195–99. http://dx.doi.org/10.4028/www.scientific.net/amm.757.195.

Abstract:
Node resources in wireless sensor networks (WSNs), including storage capacity, processing capacity, and energy supply, are limited, and node energy is mainly consumed in data transmission. Vibration signals, which change quickly and carry a large amount of data, are usually used in bearing condition monitoring. If the raw bearing vibration signal is acquired and transmitted during WSN-based condition monitoring, the node will fail early from running out of resources. In order to solve these problems and prolong node life, a compression and coding algorithm for bearing vibration signals is proposed. The algorithm is based on the 5/3 integer lifting wavelet and fuses the embedded zerotree wavelet and Huffman coding algorithms. The bearing vibration data is first processed by the 5/3 wavelet; the resulting wavelet coefficients are compressed and coded by the zerotree wavelet; and the results are further compressed by Huffman coding to improve the compression ratio. The algorithm is programmed and ported to the node's DSP, and the evaluation criteria of compression ratio (CR) and root mean square error (RMSE) are adopted to verify its performance. The experimental results show that the vibration signal is compressed and coded efficiently: the main frequency features of the vibration signal are retained even at a compression ratio of 9.5. The amount of vibration data in wireless transmission decreases greatly, and at the same time the memory space of the host computer is saved.
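
A minimal sketch of the two evaluation criteria, with simple transform-coefficient thresholding standing in for the paper's 5/3 lifting wavelet + zerotree + Huffman chain (the stand-in transform and the parameters are assumptions, not the paper's method).

```python
import numpy as np

def rmse(x: np.ndarray, x_rec: np.ndarray) -> float:
    return float(np.sqrt(np.mean((x - x_rec) ** 2)))

def compression_ratio(n_orig: int, n_kept: int) -> float:
    return n_orig / n_kept  # e.g. the paper reports CR up to 9.5

# Toy "compression": keep only the largest 10% of FFT coefficients.
t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 97 * t) + 0.2 * np.random.randn(t.size)
coeffs = np.fft.rfft(x)
keep = int(0.1 * coeffs.size)
small = np.argsort(np.abs(coeffs))[:-keep]
coeffs[small] = 0  # discard the small coefficients
x_rec = np.fft.irfft(coeffs, n=x.size)

print("CR   =", compression_ratio(coeffs.size, keep))  # ~10
print("RMSE =", rmse(x, x_rec))  # small: the dominant frequency is retained
```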
48

Venable, Richard M. "Data Transmission Through the Telephone Network: Protocols, Pitfalls, and Some Examples." Journal of AOAC INTERNATIONAL 69, no. 5 (September 1, 1986): 749–54. http://dx.doi.org/10.1093/jaoac/69.5.749.

Abstract:
Abstract Invariably, the situation arises where it is desirable to transfer data from one computer to another, especially from small laboratory systems, word processors, or home computers to large mainframe computers. In many of these cases, there are no common storage media; home computers do not have 9-track tape drives and large mainframes do not have 5¼ in. floppy disk drives. Transmission of data through the telephone network is a viable method for data transfer, which is paradoxically both easier than many believe and more difficult than some may claim. One of the keys to successful data transmission is an understanding of telecommunications protocols, i.e., the rules governing intersystem communication through the telephone network. Some of the most common protocols allow exchanging ASCII-coded data at either 300 or 1200 baud. A variety of computer systems can be used, including IBM and DEC mainframes, a Wang word processor, an IBM PC-compatible microcomputer, and the Atari 800 microcomputer. A specific example is the use of the Atari 800 as an APL terminal, complete with the custom character set, standard ASCII text, and data transfer.
49

Wojas, Gosia. "The Infallible and the Specter – Manifesting (artificial) subjectification in female sex robots." Matter: Journal of New Materialist Research 8 (July 31, 2023): 41–53. http://dx.doi.org/10.1344/jnmr.v8.43452.

Abstract:
The text outlines a recent artistic practice and theoretical research into a female AI sex doll object, its materiality, and signification. It is a culmination of a three-year study and intervention into the coded systems of control and sites of resistance that play out within the context of an artificial female body. Machine Learning (ML) algorithm is the mediator between sex robots, their users and cloud data storage, facilitating learning from their inter-actions. I examine this engagement and materiality of the sex robot through notions of feminist mimesis, substitute and simulation against emancipatory politics in object formation. I connect these ideas with theories of new materialism and recent scholarship on digital data science.
50

Arya, Mukesh Kumar, and Namit Gupta. "Adoptive Cloud Application in Semantic Web." International Journal of Advance Research and Innovation 2, no. 2 (2014): 48–51. http://dx.doi.org/10.51976/ijari.221407.

Abstract:
Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical, and personnel controls, cloud computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete, and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.
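
A toy sketch of the homomorphic-token idea above: the verifier precomputes a pseudorandom linear combination of block values and later challenges the server for the same combination, detecting tampering without holding the data. The actual scheme spans erasure-coded vectors across multiple servers, which this single-server toy omits.

```python
import random

P = 2**31 - 1  # toy prime field for block symbols

def precompute_token(blocks: list[int], seed: int) -> int:
    # The verifier keeps only the seed and this single field element.
    rng = random.Random(seed)
    return sum(rng.randrange(P) * b for b in blocks) % P

def server_response(blocks: list[int], seed: int) -> int:
    # An honest server recomputes the same linear combination on demand.
    rng = random.Random(seed)
    return sum(rng.randrange(P) * b for b in blocks) % P

blocks = [random.randrange(P) for _ in range(8)]
seed = 42
token = precompute_token(blocks, seed)

assert server_response(blocks, seed) == token    # intact storage passes
tampered = blocks[:]
tampered[3] ^= 1                                 # one modified block
assert server_response(tampered, seed) != token  # corruption detected w.h.p.
```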
