Journal articles on the topic 'Storage system bandwidth'

Consult the top 50 journal articles for your research on the topic 'Storage system bandwidth.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

He, Qin Lu, Zhan Huai Li, Le Xiao Wang, Hui Feng Wang, and Jian Sun. "Performance Measurement Technique of Cloud Storage System." Advanced Materials Research 760-762 (September 2013): 1197–201. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1197.

Abstract:
This paper investigates techniques for testing the aggregate bandwidth of file systems in cloud storage systems. Based on a theoretical analysis of in-memory, network, and parallel file systems, and on the concept of aggregate bandwidth in cloud storage, the authors developed FSPoly, a tool for testing file-system aggregate bandwidth in cloud storage environments. FSPoly is applied to the Lustre file system to find reasonable test methods, and is then used to evaluate the file-system performance of a recently developed cloud storage system.
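For a concrete sense of what such an aggregate-bandwidth test has to measure, here is a minimal sketch (a generic illustration, not FSPoly itself, whose internals the abstract does not specify): several processes write concurrently to a shared mount point, and aggregate bandwidth is the total bytes written divided by wall-clock time. The mount path, file size, and worker count are placeholder assumptions.

import os, time
from multiprocessing import Pool

MOUNT = "/mnt/cloudfs"       # hypothetical shared file-system mount
FILE_SIZE = 256 * 1024**2    # 256 MiB per worker (assumed)
BLOCK = 4 * 1024**2          # 4 MiB write granularity
WORKERS = 8                  # assumed number of concurrent clients

def write_stream(worker_id: int) -> int:
    """Sequentially write FILE_SIZE bytes and return the byte count."""
    path = os.path.join(MOUNT, f"bw_test_{worker_id}.dat")
    buf = os.urandom(BLOCK)
    written = 0
    with open(path, "wb") as f:
        while written < FILE_SIZE:
            f.write(buf)
            written += BLOCK
        f.flush()
        os.fsync(f.fileno())   # make sure the data left the page cache
    return written

if __name__ == "__main__":
    start = time.monotonic()
    with Pool(WORKERS) as pool:
        totals = pool.map(write_stream, range(WORKERS))
    elapsed = time.monotonic() - start
    # Aggregate bandwidth = bytes from all workers / wall-clock time.
    print(f"aggregate write bandwidth: {sum(totals) / elapsed / 1e6:.1f} MB/s")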
2

Lv, Hushan, Yongrui Li, Yizhuang Xie, and Tingting Qiao. "An Efficient On-Chip Data Storage and Exchange Engine for Spaceborne SAR System." Remote Sensing 15, no. 11 (2023): 2885. http://dx.doi.org/10.3390/rs15112885.

Abstract:
Advancements in remote sensing technology and very-large-scale integrated circuit (VLSI) have significantly augmented the real-time processing capabilities of spaceborne synthetic aperture radar (SAR), thereby enhancing terrestrial observational capacities. However, the inefficiency of voluminous data storage and transfer inherent in conventional methods has emerged as a technical hindrance, curtailing real-time processing within SAR imaging systems. To address the constraints of a limited storage bandwidth and inefficient data transfer, this study introduces a three-dimensional cross-mapping approach premised on the equal subdivision of sub-matrices utilizing dual-channel DDR3. This method considerably augments storage access bandwidth and achieves equilibrium in two-dimensional data access. Concurrently, an on-chip data transfer approach predicated on a superscalar pipeline buffer is proposed, mitigating pipeline resource wastage, augmenting spatial parallelism, and enhancing data transfer efficiency. Building upon these concepts, a hardware architecture is designed for the efficient storage and transfer of SAR imaging system data, based on the superscalar pipeline. Ultimately, a data storage and transfer engine featuring register addressing access, configurable granularity, and state monitoring functionalities is realized. A comprehensive imaging processing experiment is conducted via a “CPU + FPGA” heterogeneous SAR imaging system. The empirical results reveal that the storage access bandwidth of the proposed superscalar pipeline-based SAR imaging system’s data efficient storage and transfer engine can attain up to 16.6 GB/s in the range direction and 20.0 GB/s in the azimuth direction. These findings underscore that the storage exchange engine boasts superior storage access bandwidth and heightened data storage transfer efficiency. This considerable enhancement in the processing performance of the entire “CPU + FPGA” heterogeneous SAR imaging system renders it suitable for application within spaceborne SAR real-time processing systems.
3

Honeyman, Janice C., Walter Huda, Meryll M. Frost, Carole K. Palmer, and Edward V. Staab. "Picture archiving and communication system bandwidth and storage requirements." Journal of Digital Imaging 9, no. 2 (1996): 60–66. http://dx.doi.org/10.1007/bf03168858.

4

Li, Qing, Shan Qing Hu, Yang Feng, and Teng Long. "The Design and Implementation of a High-Speed and Large-Capacity NAND Flash Storage System." Applied Mechanics and Materials 543-547 (March 2014): 568–71. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.568.

Abstract:
Real-time storage systems increasingly demand higher speed and larger capacity. This paper presents a high-speed, large-capacity storage system that uses an FPGA as the master of an SOPC system controlling NAND flash chips. The system adopts a storage architecture in which several NAND flash devices on multiple buses form a parallel pipeline. Bad-block management and an ECC algorithm largely eliminate the influence of invalid blocks on the storage system and reduce the probability of data errors. The design not only improves storage bandwidth and capacity substantially but also ensures the reliability of the storage system: it achieves a capacity of 1.5 TB and a bandwidth of 1280 MB/s. The system also uses a high-speed switched interface to connect to the external network, enabling real-time data transmission and control and making the storage system standardized, universal, and scalable.
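The 1280 MB/s figure is the kind of number that falls out of simple interleaving arithmetic over parallel buses: each bus saturates at the lower of its own transfer rate and the combined program rate of its chips, and buses add up. A small model under assumed figures (the abstract gives no per-chip timings) might look like:

def aggregate_bandwidth_mb_s(buses: int, chips_per_bus: int,
                             chip_program_mb_s: float,
                             bus_mb_s: float) -> float:
    """Aggregate write bandwidth of an interleaved multi-bus NAND array."""
    # A bus pipelines transfers to its chips, so it saturates at the
    # smaller of its own rate and its chips' combined program rate.
    per_bus = min(bus_mb_s, chips_per_bus * chip_program_mb_s)
    return buses * per_bus

# Assumed figures for illustration only: 8 buses, 10 chips per bus,
# 20 MB/s sustained program rate per chip, 160 MB/s per bus.
print(aggregate_bandwidth_mb_s(8, 10, 20.0, 160.0))  # -> 1280.0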
5

Yin, Chao, Changsheng Xie, Jiguang Wan, Chih-Cheng Hung, Jinjiang Liu, and Yihua Lan. "BMCloud: Minimizing Repair Bandwidth and Maintenance Cost in Cloud Storage." Mathematical Problems in Engineering 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/756185.

Abstract:
To protect data in cloud storage, fault tolerance and efficient recovery are essential. Recent studies have developed numerous erasure-code-based solutions to this problem using functional repair. However, two limitations remain: the first is consistency, since the encoding matrix (EM) differs among clouds; the other is repair bandwidth. We address both problems from theoretical and practical perspectives. We developed BMCloud, a cloud storage system that aims to reduce both repair bandwidth and maintenance cost. The system employs both functional repair and exact repair and inherits the advantages of both. We propose the JUDGE_STYLE algorithm, which decides whether the system should adopt exact repair or functional repair. We implemented a networked storage system prototype and demonstrated our findings. Compared with existing solutions, BMCloud can be used in engineering practice to save repair bandwidth and significantly reduce maintenance cost.
6

Kanrar, Soumen, and Niranjan Kumar Mandal. "Video Traffic Flow Analysis in Distributed System during Interactive Session." Advances in Multimedia 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/7829570.

Abstract:
Cost-effective, smooth multimedia streaming to remote customers through a distributed video-on-demand architecture has been a challenging research issue for over a decade. A hierarchical system design is used in distributed networks to satisfy more requesting users; the distributed hierarchical network contains all the local and remote multimedia storage servers and provides continuous availability of the data stream to requesting customers. In this work, we propose a novel data-stream handling methodology that reduces connection failures and delivers smooth multimedia streams to remote customers. The proposed session-based single-user bandwidth requirement model captures the bandwidth needed for interactive operations such as pause, slow motion, rewind, skipping some frames, and fast-forward by a constant number of frames. The proposed session-based optimal-storage-finding algorithm reduces the search hop count toward the remote storage data server. Modeling and simulation results show a positive impact on the distributed system architecture. This work presents a novel bandwidth requirement model for interactive sessions and quantifies the trade-off between communication and storage costs for different system resource configurations.
7

Liu, Shiqiu, Fangwei Ye, and Qihui Wu. "Clustered Distributed Data Storage Repairing Multiple Failures." Entropy 27, no. 3 (2025): 313. https://doi.org/10.3390/e27030313.

Abstract:
A clustered distributed storage system (DSS), also called a rack-aware storage system, is a distributed storage system in which the nodes are grouped into several clusters. The communication between two clusters may be restricted by their connectivity; that is to say, the communication cost between nodes differs depending on their location. As such, when repairing a failed node, downloading data from nodes that are in the same cluster is much cheaper and more efficient than downloading data from nodes in another cluster. In this article, we consider a scenario in which the failed nodes only download data from nodes in the same cluster, which is an extreme and important case that leverages the fact that the intra-cluster bandwidth is much cheaper than the cross-cluster repair bandwidth. Also, we study the problem of repairing multiple failures in this article, which allows for collaboration within the same cluster, i.e., failed nodes in the same cluster can exchange data with each other. We derive the trade-off between the storage and repair bandwidth for the clustered DSSs and provide explicit code constructions achieving two extreme points in the trade-off, namely the minimum storage clustered collaborative repair (MSCCR) point and the minimum bandwidth clustered collaborative repair (MBCCR) point, respectively.
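For context, the MSCCR and MBCCR points named here generalize the classical minimum-storage and minimum-bandwidth regenerating (MSR/MBR) points. In the standard non-clustered, single-failure setting of Dimakis et al., for a file of size M coded across n nodes so that any k recover it, with d helpers contacted per repair, the two extremes of the storage-bandwidth trade-off are

\[
(\alpha_{\mathrm{MSR}},\ \gamma_{\mathrm{MSR}}) = \left(\frac{M}{k},\ \frac{M}{k}\cdot\frac{d}{d-k+1}\right),
\qquad
(\alpha_{\mathrm{MBR}},\ \gamma_{\mathrm{MBR}}) = \left(\frac{2Md}{k(2d-k+1)},\ \frac{2Md}{k(2d-k+1)}\right),
\]

where \alpha is the per-node storage and \gamma = d\beta is the total download per repair. The clustered collaborative points studied in this paper play the same roles once helpers are restricted to the failed node's cluster and failed nodes may collaborate; their exact expressions are given in the paper.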
8

Li, Yun Peng. "A Design of High Speed Data Acquisition and Storage System." Applied Mechanics and Materials 367 (August 2013): 541–43. http://dx.doi.org/10.4028/www.scientific.net/amm.367.541.

Abstract:
This article focuses on the research and implementation of a solid-state storage system based on NAND flash that stores data at high speed and with huge capacity. A design with four 1.25 Gsps ADCs and a 1 TB flash storage array is demonstrated; such designs are widely applied in fields such as radar, communication, and speech recognition. The hardware development is described in detail, and a method is discussed for improving read and write bandwidth through parallel operations on multiple flash devices. Using this method, the data bandwidth reaches 6 GB/s.
9

Yolchuyev, Agil, and Janos Levendovszky. "Data Chunks Placement Optimization for Hybrid Storage Systems." Future Internet 13, no. 7 (2021): 181. http://dx.doi.org/10.3390/fi13070181.

Abstract:
"Hybrid Cloud Storage" (HCS) is a widely adopted framework that combines the functionality of public and private cloud storage models to provide storage services. This kind of storage is especially attractive for organizations that seek to reduce the cost of their storage infrastructure by using public cloud storage as a backend to on-premises primary storage. Despite its higher performance, the hybrid cloud has latency issues related to the distance to, and bandwidth of, the public storage, which may cause a significant drop in the performance of the storage system during data transfer. This issue can become a major problem when one or more private storage nodes fail. In this paper, we propose a new framework for optimizing the data uploading process currently used in hybrid cloud storage systems. The optimization spreads the data over the multiple storage nodes in the HCS system according to predefined objective functions. Furthermore, we use network coding techniques to minimize data transfer latency between the receiver (private storage) and transmitter nodes.
10

Niu, Xin, and Jingjing Jiang. "Single node repair algorithm for a multimedia cloud storage system based on network coding." Journal of High Speed Networks 27, no. 3 (2021): 205–14. http://dx.doi.org/10.3233/jhs-210661.

Abstract:
Multimedia cloud storage is inconvenient to use, difficult to maintain, and redundant in its data storage. To address these problems and apply cloud storage to the integration of university teaching resources, this paper designs a virtualized cloud storage platform for university multimedia classrooms. The platform has many advantages, such as reducing the initial investment in multimedia classrooms, simplifying management, making maximum use of actual resources, and easing access to resources; experiments and analysis show its feasibility and effectiveness. To address the problems of existing single-node repair algorithms for multimedia cloud storage systems, namely a large finite field, high codec complexity, high disk I/O (input/output) cost, and an imbalance between storage overhead and repair bandwidth, a network-coding-based single-node repair algorithm for multimedia cloud storage systems is proposed. The algorithm stores the multimedia file data in groups in the system and XORs (exclusive-ORs) the data within each group over the finite field GF(2). When a node fails, a new node only needs to connect to two or three non-faulty nodes in the same group to accurately repair the data of the failed node. Theoretical analysis and simulation results show that the algorithm reduces codec and repair complexity and lowers disk I/O overhead, while its storage cost matches that of the minimum-storage regenerating code algorithm and its repair bandwidth cost approaches that of the minimum-bandwidth regenerating code algorithm.
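A minimal sketch of the in-group XOR repair idea (a generic GF(2) parity scheme, not the paper's exact grouping or connection rule): each group stores a parity block equal to the XOR of its data blocks, so a single lost block is rebuilt from surviving in-group blocks only.

def xor_blocks(blocks):
    """XOR equal-length byte blocks over GF(2)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# A group of three data blocks plus their parity block.
data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]
parity = xor_blocks(data)

# Block 1 is lost: repair it from the two survivors and the parity,
# touching no node outside the group.
repaired = xor_blocks([data[0], data[2], parity])
assert repaired == data[1]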
11

Kumar, M. A. R., and Srilatha Puli. "FINDING DATA DEDUPLICATION USING CLOUD." YMER Digital 21, no. 05 (2022): 136–42. http://dx.doi.org/10.37896/ymer21.05/17.

Abstract:
Data grows at a rate of about 50% per year, and 75% of the digital world consists of copies. Although keeping multiple copies of data is necessary to guarantee availability and high continuity, the amount of data redundancy is excessive. By keeping a single copy of repeated data, data deduplication is one of the most promising solutions for reducing storage costs and improving the user experience by saving network bandwidth and reducing backup time. However, this solution must still resolve many security issues to be fully satisfactory. In this project we target attacks from malicious clients that are based on the manipulation of data identifiers and on the observation of backup time and network traffic. Our system provides global storage space savings, per-client network bandwidth savings between clients and deduplication proxies, and global network bandwidth savings between deduplication proxies and the storage server. An evaluation of our solution against a classic system shows that the overhead introduced by our scheme is mostly due to the data encryption needed to ensure data confidentiality. Data deduplication allows cloud users to manage their cloud storage space effectively by avoiding storage of repeated data and saving bandwidth. Here we use CloudMe for data storage. For confidentiality, the data are stored in encrypted form using the Advanced Encryption Standard (AES) algorithm.
12

Wang, Lin. "Optimization of network performance of distributed storage system for biomechanical big data based on cloud computing." Molecular & Cellular Biomechanics 22, no. 5 (2025): 1743. https://doi.org/10.62617/mcb1743.

Abstract:
This study proposes a network performance optimization strategy based on cloud computing to address the stringent demands of biomechanical big data on the efficiency of distributed storage systems. Biomechanical data, including motion capture, force plate measurements, and tissue strain analysis, involve large-scale, high-frequency, and heterogeneous datasets that necessitate efficient storage and real-time processing. By optimizing data transmission paths, designing an efficient caching mechanism, dynamically allocating bandwidth resources, and implementing network congestion control, the system significantly enhances throughput, reduces latency, and improves bandwidth utilization and data transmission reliability.
13

Bhattacharya, Hindol, Samiran Chattopadhyay, Matangini Chattopadhyay, and Avishek Banerjee. "Storage and Bandwidth Optimized Reliable Distributed Data Allocation Algorithm." International Journal of Ambient Computing and Intelligence 10, no. 1 (2019): 78–95. http://dx.doi.org/10.4018/ijaci.2019010105.

Abstract:
Distributed storage allocation is an important optimization problem in reliable distributed storage: it aims to minimize storage cost while maximizing the probability of error recovery through optimal placement of data on distributed storage nodes. A key characteristic of distributed storage is that data is stored in remote servers across a network, so network resources, especially communication links, are an expensive and non-trivial resource that should be optimized as well. In this article, the authors present a simulation-based study of the network characteristics of a distributed storage network under several allocation patterns. By varying the allocation patterns, the authors demonstrate the interdependence between network bandwidth, defined in terms of link capacity, and allocation pattern, using network throughput as a metric. Motivated by the importance of network resources as a cost metric, the authors formalize an optimization problem that jointly minimizes both the storage cost and the cost of network resources. A hybrid meta-heuristic algorithm is employed to solve this optimization problem by allocating data in a distributed storage system. Experimental results validate the efficacy of the algorithm.
14

Shi, Jiang Yi, Kun Chen, Kang Li, and Zhi Xiong Di. "A Storage Scheme for Fast Packet Buffers in Network Processor." Advanced Materials Research 403-408 (November 2011): 2628–31. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2628.

Abstract:
As Internet services have grown explosively, the requirements on network bandwidth have become stringent. With the extraordinary development of processing capability, memory access control has become a key factor limiting the performance of network processors. This paper proposes a storage management scheme for fast packet buffers in a network processor that improves bandwidth utilization. Experimental results show that the approach markedly improves memory-system access rates in the network processor.
15

Wable, Trupti Kaushiram. "Design and Implementation of Collaborative Cloud-Edge System Using Raspberry Pi for Video Surveillance System with AIoT to Analyse Effective Performance Parameters of Network." Research and Applications: Emerging Technologies 6, no. 2 (2024): 30–35. https://doi.org/10.5281/zenodo.11607879.

Abstract:
Video surveillance can prevent many crimes, help reduce the crime rate in society, and save lives. However, currently deployed IoT systems have limitations such as insufficient storage capacity and inadequate processing of information. Traditional IoT systems can therefore be integrated with Artificial Intelligence (AI) models to improve storage and processing, an approach called the Artificial Intelligence of Things (AIoT). This system focuses on the performance parameters of a video surveillance system: response latency, network bandwidth, and storage on the server. The proposed system is divided into two parts. The first is an edge node implemented with a Raspberry Pi as the IoT device, which takes video input, performs image processing, and stores the output on the edge node; the second is a cloud node trained with an AI model, which extracts images and analyzes system performance. This cloud-edge collaborative system is referred to as AIoT. A comparative study of a traditional cloud computing system against the collaborative cloud-edge system shows that response latency improves by 5 times, network bandwidth by 10 times, and storage capacity by 5 times relative to a traditional edge computing system.
16

Faruqui, Nuruzzaman, Sandesh Achar, Sandeepkumar Racherla, et al. "Cloud IaaS Optimization Using Machine Vision at the IoT Edge and the Grid Sensing Algorithm." Sensors 24, no. 21 (2024): 6895. http://dx.doi.org/10.3390/s24216895.

Abstract:
Security grids consisting of High-Definition (HD) Internet of Things (IoT) cameras are gaining popularity for organizational perimeter surveillance and security monitoring. Transmitting HD video data to cloud infrastructure requires high bandwidth and more storage space than text, audio, and image data. It becomes more challenging for large-scale organizations with massive security grids to minimize cloud network bandwidth and storage costs. This paper presents an application of Machine Vision at the IoT Edge (Mez) technology in association with a novel Grid Sensing (GRS) algorithm to optimize cloud Infrastructure as a Service (IaaS) resource allocation, leading to cost minimization. Experimental results demonstrated a 31.29% reduction in bandwidth and a 22.43% reduction in storage requirements. The Mez technology offers a network latency feedback module with knobs for transforming video frames to adjust to the latency sensitivity. The association of the GRS algorithm introduces its compatibility in the IoT camera-driven security grid by automatically ranking the existing bandwidth requirements by different IoT nodes. As a result, the proposed system minimizes the entire grid’s throughput, contributing to significant cloud resource optimization.
17

Chen, Ningjiang, Weitao Liu, Wenjuan Pu, Yifei Liu, and Qingwei Zhong. "SDNC-Repair: A Cooperative Data Repair Strategy Based on Erasure Code for Software-Defined Storage." Sensors 23, no. 13 (2023): 5809. http://dx.doi.org/10.3390/s23135809.

Abstract:
Erasure-code-based storage systems suffer from long repair times and low I/O performance, resulting in high repair costs. For many years, researchers have focused on reducing the repair cost of erasure-code-based storage systems. In this study, we discuss the shortcomings of node selection, data transfer, and data repair in erasure-code-based storage systems. Based on the network topology and node structure, we propose SDNC-Repair, a cooperative erasure-code data repair strategy for SDS (Software-Defined Storage), and describe its framework. We then propose a data-source selection algorithm that senses the available network bandwidth between nodes and a data-flow scheduling algorithm within SDNC-Repair. Additionally, we propose a data repair method based on node collaboration and data aggregation. Experiments illustrate that the proposed method has better repair performance under different data granularities. Although SDNC-Repair is more constrained by cross-rack bandwidth than the conventional repair method, it effectively improves system throughput and significantly reduces data repair time in scenarios where multiple nodes fail and bandwidth is limited.
18

Su, Yu, Shu Hong Wen, and Jian Ping Chai. "Embedded System Based Television Data Collection and Return Technology." Applied Mechanics and Materials 48-49 (February 2011): 496–501. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.496.

Abstract:
Television data collection and return are key technologies in television secure-broadcasting systems, TV video content surveillance, TV program copyright protection, and client advertisement broadcasting. In China, the dominant methods of TV video content surveillance are manual tape recording and automatic return of the whole TV program. The manual method costs too much, and whole-program return needs a great deal of network bandwidth and storage space. This paper proposes a new method of television data collection and return: video fields are extracted from the continuous video and coded at a frequency of about one field per second; in other words, one field is extracted from every fifty fields of the original PAL video. Extracted frames can be coded by any means, for example JPEG2000, or the intra mode of H.264 or MPEG2. The TV programs whose content and topic change most frequently are news and advertising, which may change topic every five to ten seconds, so the extracted sequences preserve the topic and content of the original video with enough information for content surveillance applications. The data quantity of the extracted sequence is about 3 percent of the original video program, which saves a large amount of network bandwidth and storage space. A hardware implementation based on an embedded system is proposed: the TV Field Extractor, which cyclically extracts images from the target TV program, compresses them with a high-performance algorithm, and stores the resulting sequences of still images on the hard disk or transmits them to the monitoring center via the network. This method markedly reduces device cost, network bandwidth, and storage space, and can be widely adopted in TV program content surveillance and TV secure-broadcasting systems.
19

Xue, Nian, and Le Chang. "Design Tradeoffs in Applying CAS to File System Backed by Amazon-S3." Applied Mechanics and Materials 543-547 (March 2014): 3423–26. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.3423.

Abstract:
This paper first introduces the basic concepts of cloud storage and related technologies, and then analyzes the architecture and components of a cloud storage platform compatible with Amazon S3. In particular, it shows the implementation of CAS (Content Addressable Storage), which also needs to communicate with the S3 server. S3fs, an Amazon S3-based file system, is developed on the FUSE framework on top of Amazon S3; the communication between CAS and S3 is transparent to users. S3fs also manages chunked files indexed by hash value, which reduces the bandwidth consumed by synchronization.
20

Durner, Dominik, Badrish Chandramouli, and Yinan Li. "Crystal." Proceedings of the VLDB Endowment 14, no. 11 (2021): 2432–44. http://dx.doi.org/10.14778/3476249.3476292.

Abstract:
Cloud analytical databases employ a disaggregated storage model, where the elastic compute layer accesses data persisted on remote cloud storage in block-oriented columnar formats. Given the high latency and low bandwidth to remote storage and the limited size of fast local storage, caching data at the compute node is important and has resulted in a renewed interest in caching for analytics. Today, each DBMS builds its own caching solution, usually based on file- or block-level LRU. In this paper, we advocate a new architecture of a smart cache storage system called Crystal that is co-located with compute. Crystal's clients are DBMS-specific "data sources" with push-down predicates. Similar in spirit to a DBMS, Crystal incorporates query processing and optimization components focusing on efficient caching and serving of single-table hyper-rectangles called regions. Results show that Crystal, with a small DBMS-specific data source connector, can significantly improve query latencies on unmodified Spark and Greenplum while also saving on bandwidth from remote storage.
21

Zhang, Ling, and Shuai Shuai Zhu. "Data Dispersal for Large Storage Systems." Applied Mechanics and Materials 644-650 (September 2014): 2334–37. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.2334.

Abstract:
Data dispersal mechanisms are a group of basic working protocols for processing oceans of data, and they play an important role in enhancing system error correction and avoiding storage device failure. Data recovery availability for the client is the main design goal of data dispersal algorithms, while system storage and computing costs are also important factors. In this paper, we discuss how to disperse data with the desired properties at acceptable system cost. We develop a universal approach to measuring storage node availability based on state variables. After analyzing existing data processing algorithms, we present a new hybrid data dispersal algorithm and evaluate it in detail using this approach. Two redundancy policies are adapted to storage nodes with different availability. According to our analysis, the algorithm has advantages in system storage and communication bandwidth costs.
22

Rasina Begum, B., and P. Chithra. "Improving Security on Cloud Based Deduplication System." Asian Journal of Computer Science and Technology 7, S1 (2018): 16–19. http://dx.doi.org/10.51983/ajcst-2018.7.s1.1813.

Abstract:
Cloud computing provides a scalable platform for large amounts of data and for processes that serve various applications and services on demand. The storage services offered by clouds have become a new source of profit growth by providing comparably cheap, scalable, location-independent platforms for managing users' data. Clients use cloud storage to enjoy high-end applications and services from a shared pool of configurable computing resources, reducing the burden of local data storage and maintenance; but this raises severe security issues for users' outsourced data. Data redundancy improves data reliability in cloud storage, yet at the same time it increases storage space, bandwidth, and security threats due to server vulnerabilities. Data deduplication helps improve storage utilization, and backups shrink, which means less hardware and backup media; but it has many security issues. Data reliability is a particularly risky issue in a deduplication storage system because a single copy of each file is stored in the server and shared by all its data owners: if such a shared file or chunk goes missing, a large amount of data becomes unreachable. The main aim of this work is to implement a deduplication system without sacrificing security in cloud storage. It combines deduplication with convergent-key cryptography at reduced overhead.
23

Vignesh, R., and J. Preethi. "Secure Data Deduplication System with Efficient and Reliable Multi-Key Management in Cloud Storage." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 4 (2022): 811–25. http://dx.doi.org/10.53106/160792642022072304016.

Abstract:
The revolutionary growth of processing and storage mechanisms over the Internet has made inexpensive, powerful computing resources available. Cloud computing is a rising technology that offers online data storage and application access. It presents countless opportunities as well as challenges; among these, the security of data and the growth of duplicate data in the cloud are important issues to be addressed. Deduplication reduces the duplicate data present in the storage system. In this paper, a novel technique is proposed to remove duplicate data from the cloud and thereby save bandwidth and storage space. The experimental results demonstrate that the proposed system provides stronger security for data in cloud storage and overcomes the main drawbacks of existing systems. For both single-server and distributed storage systems, we have created a solution that provides data security and space efficiency. Encryption keys are generated deterministically from the chunk data, so the same chunk is always encrypted to the same ciphertext; in addition, the keys cannot be derived from the encrypted chunk data. Because the information accessed and decrypted by each user is encrypted with a key known to that user alone, even a complete system breach cannot expose which chunks are used by which users.
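The deterministic chunk encryption described above is commonly realized as convergent encryption: the key is a hash of the chunk itself, so equal plaintext chunks always produce equal ciphertexts and can be deduplicated server-side. A minimal sketch under that assumption (AES-CTR with a key-derived nonce, using the Python cryptography package; this illustrates the general technique, not the authors' multi-key management scheme):

import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def convergent_encrypt(chunk: bytes) -> tuple[bytes, bytes]:
    """Encrypt a chunk under a key derived from its own content."""
    key = hashlib.sha256(chunk).digest()        # convergent key
    nonce = hashlib.sha256(key).digest()[:16]   # deterministic nonce
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return key, enc.update(chunk) + enc.finalize()

# Equal chunks yield equal ciphertexts, so the server can deduplicate
# them without learning the plaintext or the key.
k1, c1 = convergent_encrypt(b"same chunk")
k2, c2 = convergent_encrypt(b"same chunk")
assert c1 == c2 and k1 == k2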
24

Wagh, Sameer, Paul Cuff, and Prateek Mittal. "Differentially Private Oblivious RAM." Proceedings on Privacy Enhancing Technologies 2018, no. 4 (2018): 64–84. http://dx.doi.org/10.1515/popets-2018-0032.

Abstract:
In this work, we investigate whether statistical privacy can enhance the performance of ORAM mechanisms while providing rigorous privacy guarantees. We propose a formal and rigorous framework for developing ORAM protocols with statistical security, viz. a differentially private ORAM (DP-ORAM). We present Root ORAM, a family of DP-ORAMs that provides a tunable, multi-dimensional trade-off between the desired bandwidth overhead, local storage, and system security. We theoretically analyze Root ORAM to quantify both its security and its performance. We experimentally demonstrate the benefits of Root ORAM and find that (1) Root ORAM can reduce local storage overhead by about 2× for reasonable values of the privacy budget, significantly enhancing performance on memory-limited platforms such as trusted execution environments, and (2) Root ORAM allows tunable trade-offs between bandwidth, storage, and privacy, reducing bandwidth overheads by up to 2×-10× (at the cost of increased storage/statistical privacy), enabling significant reductions in ORAM access latencies for cloud environments. We also analyze the privacy guarantees of DP-ORAMs through the lens of the information-theoretic metrics of Shannon entropy and min-entropy [16]. Finally, Root ORAM is ideally suited to applications with similar access patterns, and we showcase its utility via an application to Private Information Retrieval.
25

Vasam, Varshini, and B. Sridhara Murthy. "EFFICIENT DATA STORAGE MECHANISM IN CLOUD COMPUTING." GLOBAL JOURNAL OF ENGINEERING SCIENCE AND RESEARCHES 5, no. 6 (2018): 16–21. https://doi.org/10.5281/zenodo.1262287.

Abstract:
As data gradually grows within data storage areas, cloud storage systems continuously face challenges in saving storage capacity and in providing the capabilities necessary to move big data within an acceptable time frame. In this paper, we propose Boafft, a cloud storage system with distributed deduplication. Boafft achieves scalable throughput and capacity by using multiple data servers to deduplicate data in parallel, with minimal loss of deduplication ratio. First, Boafft uses an effective data routing algorithm based on data similarity that reduces network overhead by quickly identifying the storage location. Second, Boafft maintains an in-memory similarity index in each data server that helps avoid a large number of random disk reads and writes, which in turn accelerates local data deduplication. Third, Boafft builds a hot-fingerprint cache in each data server based on access frequency, so as to improve the data deduplication ratio. Our comparative analysis with EMC's stateful routing algorithm reveals that Boafft can provide a comparatively high deduplication ratio with low network bandwidth overhead. Moreover, Boafft makes better use of the storage space, with higher read/write bandwidth and good load balance.
26

Shahverdi, Masood, Michael S. Mazzola, Quintin Grice, and Matthew Doude. "Bandwidth-Based Control Strategy for a Series HEV With Light Energy Storage System." IEEE Transactions on Vehicular Technology 66, no. 2 (2017): 1040–52. http://dx.doi.org/10.1109/tvt.2016.2559949.

27

Dang, Shoujiang, and Rui Han. "An In-Network Cooperative Storage Schema Based on Neighbor Offloading in a Programmable Data Plane." Future Internet 14, no. 1 (2021): 18. http://dx.doi.org/10.3390/fi14010018.

Abstract:
In scientific domains such as high-energy particle physics and genomics, the quantity of high-speed data traffic generated may far exceed the storage throughput of the current node, so the data cannot be stored in time. Cooperating with, and utilizing, multiple storage nodes on the forwarding path provides an opportunity for high-speed data storage. This paper proposes the use of flow entries to dynamically split traffic among selected neighbor nodes to sequentially absorb excess traffic. We propose a neighbor selection mechanism based on the Local Name Mapping and Resolution System, in which node weights are computed by combining link bandwidth and node storage capability, and traffic is split when normalized weight values exceed thresholds. To dynamically offload traffic among multiple targets, a cooperative storage strategy implemented in a programmable data plane is presented, using relative weights and ID suffix matching. Evaluation shows that our proposed schema is more efficient than end-to-end transmission and ECMP in terms of bandwidth usage and transfer time, and is beneficial for big-science applications.
28

Dimililer, Kamil. "Backpropagation Neural Network Implementation for Medical Image Compression." Journal of Applied Mathematics 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/453098.

Abstract:
Medical images require compression, before transmission or storage, due to constrained bandwidth and storage capacity. An ideal image compression system must yield high-quality compressed image with high compression ratio. In this paper, Haar wavelet transform and discrete cosine transform are considered and a neural network is trained to relate the X-ray image contents to their ideal compression method and their optimum compression ratio.
29

Riadi, Imam, Tohari Ahmad, Riyanarto Sarno, Purwono Purwono, and Alfian Ma'arif. "Developing Data Integrity in an Electronic Health Record System using Blockchain and InterPlanetary File System (Case Study: COVID-19 Data)." Emerging Science Journal 4 (February 1, 2022): 190–206. http://dx.doi.org/10.28991/esj-2021-sp1-013.

Abstract:
The misuse of health data stored in the Electronic Health Record (EHR) system can be uncontrolled; for example, privacy and data security related to Corona Virus Disease-19 (COVID-19) patient diagnoses and vaccine certificates in Indonesia have been mishandled. We propose a system framework design utilizing the InterPlanetary File System (IPFS) and Blockchain technology to overcome this problem. The IPFS environment supports large data storage with a distributed network powered by the Ethereum blockchain. The combination of these technologies allows data stored in the EHR to be secure and available at any time. All data are secured with a blockchain cryptographic algorithm and can only be accessed using a user's private key. System testing evaluates the mechanism and process of storing and accessing data from 346 computers connected to the IPFS network and Blockchain by considering several parameters, such as gas units, CPU load, network latency, and bandwidth used. The results show that 135205 gas units are used in each transaction. The average execution speed ranges from 12.98 to 14.08 GHz; 26 KB/s of incoming and 4 KB/s of outgoing bandwidth are used. Our contribution is in designing a blockchain-based decentralized EHR system that maximizes the use of private keys as an access right to maintain the integrity of COVID-19 diagnosis and certificate data. We also provide alternative storage using a distributed IPFS to maintain data availability at all times, as a solution to the problem of traditional cloud storage, which often ignores data availability.
30

Hema, S., and A. Kangaiammal. "A Secure Method for Managing Data in Cloud Storage using Deduplication and Enhanced Fuzzy Based Intrusion Detection Framework." International Journal for Modern Trends in Science and Technology 6, no. 11 (2020): 165–73. http://dx.doi.org/10.46501/ijmtst061131.

Abstract:
Cloud services increase data availability so as to offer flawless service to the client. Because of this increasing availability, more redundancy and more memory space are required to store the data; cloud computing requires substantial storage and efficient protection for all types of data. With the amount of data produced increasing exponentially over time, storing replicated data contents is inevitable, so storage optimization approaches become an important prerequisite for enormous storage domains like cloud storage. Data deduplication is a technique that compresses data by eliminating replicated copies of similar data; it is widely utilized in cloud storage to conserve bandwidth and minimize storage space. Although data deduplication eliminates data redundancy and replication, it also presents significant data privacy and security problems for the end user. Considering this, a novel security-based deduplication model is proposed in this work to reduce the hash value of a given file and provide additional security for cloud storage. In the proposed method, the hash value of a given file is reduced using the Distributed Storage Hash Algorithm (DSHA), and for security the file is encrypted using an Improved Blowfish Encryption Algorithm (IBEA). The framework also proposes an enhanced fuzzy-based intrusion detection system (EFIDS) that defines rules for the major attacks and alerts the system automatically. Finally, the combination of data deduplication and security encryption allows cloud users to manage their cloud storage effectively by avoiding repeated data encroachment, saving bandwidth, and alerting the system to attackers. The experimental results reveal that the discussed algorithm yields improved throughput and bytes saved per second in comparison with other chunking algorithms.
31

Ye, Miao, Ruoyu Wei, Wei Guo, Qiuxiang Jiang, Hongbing Qiu, and Yong Wang. "A New Method for Reconstructing Data on a Single Failure Node in the Distributed Storage System Based on the MSR Code." Wireless Communications and Mobile Computing 2021 (March 31, 2021): 1–14. http://dx.doi.org/10.1155/2021/5574255.

Abstract:
As a storage method for distributed storage systems, erasure codes save storage space and can repair the data of failed nodes. However, most studies of fault-node repair under erasure coding consider only the case where heterogeneous link bandwidth restricts the repair rate, ignoring heterogeneous storage nodes, the cost of repair traffic during the repair process, and the influence of secondary node failures on the repair process. An optimal repair strategy based on the minimum storage regenerating (MSR) code and a hybrid genetic algorithm is proposed for single-node fault scenarios to solve these problems. In this work, the single-node data repair problem is modeled as a constrained optimization problem over Steiner trees that accounts for heterogeneous link bandwidth and heterogeneous node processing capacity, taking repair traffic and repair delay as the optimization objectives. A hybrid genetic algorithm is then designed to solve the problem. The experimental results show that at the same scales as the MSR code cases, our approach is robust, and its repair delay decreases by 10% and 55% compared with the conventional tree and star repair topologies, respectively, while its repair traffic increases by 10% compared with the star topology and decreases by 40% compared with the conventional tree repair topology.
32

DENG, YUHUI, FRANK WANG, JIANGLING ZHANG, DAN FENG, FANG WANG, and HONG JIANG. "PUSH THE BOTTLENECK OF STREAMING MEDIA SYSTEM FROM STREAMING MEDIA SERVER TO NETWORK." International Journal of Image and Graphics 05, no. 04 (2005): 859–69. http://dx.doi.org/10.1142/s0219467805002038.

Abstract:
Streaming media is pervasive on the Internet now and it continues to grow rapidly. Most streaming media systems have adopted a typical system architecture in which the storage devices are attached privately to the streaming media server. Unfortunately, with the steady growth of Internet subscribers, the streaming media server quickly becomes a system bottleneck. This paper proposes an innovative, high performance streaming media system architecture (HPSMS) based on the logical separation in the streaming media transport protocol. The architecture avoids expensive store-and-forward data copying between the streaming media server and storage devices. The most salient feature of HPSMS is that the architecture eliminates the streaming media server bottleneck, while dynamically increasing system bandwidth with the expansion of storage system capacity. The system performance of the proposed HPSMS is evaluated through a prototype implementation.
33

Lai, Longbin, Linfeng Shen, Yanfei Zheng, Kefei Chen, and Jing Zhang. "Analysis for REPERA." International Journal of Cloud Applications and Computing 2, no. 1 (2012): 71–82. http://dx.doi.org/10.4018/ijcac.2012010105.

Abstract:
Distributed systems, especially those providing cloud services, endeavor to construct sufficiently reliable storage in order to attract more customers. Pure replication and erasure codes are widely adopted in distributed systems to guarantee reliable data storage, yet both have deficiencies: pure replication consumes too much extra storage and bandwidth, while erasure codes are less efficient and are suitable only for read-only contexts. The authors propose REPERA, a hybrid mechanism combining pure replication and erasure coding to leverage their advantages and mitigate their shortcomings. This paper qualitatively compares fault-resilient distributed architectures built with pure replication, erasure coding, and REPERA. The authors show that systems employing REPERA, on one hand, obtain higher availability than erasure-resilient systems and, on the other hand, benefit from more durable storage than replicated systems, with similar space and bandwidth consumption. Furthermore, since REPERA was developed on an open platform, the authors prepare an experiment to evaluate its performance against the original system.
34

Shahverdi, Masood, Michael Mazzola, Matthew Doude, Quintin Grice, Jim Gafford, and Nicolas Sockeel. "An Experiment-Based Methodology for Evaluating the Impacts of Full Bandwidth Load on the Hybrid Energy Storage System for Electrified Vehicles." Sci 1, no. 1 (2019): 26. http://dx.doi.org/10.3390/sci1010026.

Abstract:
The cost, efficiency, and durability of electrified vehicles depend on the components, configuration, and performance of the energy storage system (ESS). Pursuing a minimal-size approach, this paper describes a methodology for quantitatively and qualitatively investigating the impacts of a full-bandwidth load on the ESS of an HEV; the methodology can be extended to other electrified vehicles. The full-bandwidth load, up to the operating frequency of the electric motor drive (20 kHz), is measured empirically and includes a frequency range beyond that covered by published standard drive cycles (up to 0.5 Hz). The higher frequency band is shown to be covered more efficiently by a hybrid energy storage system (HESS), defined here as the combination of a high-energy-density battery, an ultracapacitor (UC), an electrolytic capacitor, and a film capacitor. The harmonic and dc currents and voltages are measured with two precision methods, and the results are used to discuss overall HEV efficiency and durability. More importantly, the impact of adding high-band energy storage devices on reducing power loss during transient events is revealed through the precision-measurement-based methodology.
35

Agarwal, Sachin, Jatinder Pal Singh, and Shruti Dube. "Analysis and Implementation of Gossip-Based P2P Streaming with Distributed Incentive Mechanisms for Peer Cooperation." Advances in Multimedia 2007 (2007): 1–12. http://dx.doi.org/10.1155/2007/84150.

Abstract:
Peer-to-peer (P2P) systems are becoming a popular means of streaming audio and video content but they are prone to bandwidth starvation if selfish peers do not contribute bandwidth to other peers. We prove that an incentive mechanism can be created for a live streaming P2P protocol while preserving the asymptotic properties of randomized gossip-based streaming. In order to show the utility of our result, we adapt a distributed incentive scheme from P2P file storage literature to the live streaming scenario. We provide simulation results that confirm the ability to achieve a constant download rate (in time, per peer) that is needed for streaming applications on peers. The incentive scheme fairly differentiates peers' download rates according to the amount of useful bandwidth they contribute back to the P2P system, thus creating a powerful quality-of-service incentive for peers to contribute bandwidth to other peers. We propose a functional architecture and protocol format for a gossip-based streaming system with incentive mechanisms, and present evaluation data from a real implementation of a P2P streaming application.
36

UDOMARIYASAP, P., S. NOPPANAKEEPONG, S. MITATHA, and P. P. YUPAPIN. "THz LIGHT PULSE GENERATION AND STORAGE WITHIN AN EMBEDDED OPTICAL WAVEGUIDE SYSTEM." Journal of Nonlinear Optical Physics & Materials 19, no. 02 (2010): 303–10. http://dx.doi.org/10.1142/s0218863510005194.

Abstract:
We propose results of a high-frequency generation method intended for use in the THz regime. The generating system consists of two micro rings and a nano ring, integrated into a single system, in which a Gaussian pulse propagating within the ring resonator system generates a large bandwidth. The selected signals can be filtered using an optical add/drop filter. By controlling the ring parameters, an appropriate output power can be obtained and adapted for either imaging or communication applications. Moreover, a very wide band of wavelengths can be generated and controlled for various applications.
37

Durner, Dominik, Viktor Leis, and Thomas Neumann. "Exploiting Cloud Object Storage for High-Performance Analytics." Proceedings of the VLDB Endowment 16, no. 11 (2023): 2769–82. http://dx.doi.org/10.14778/3611479.3611486.

Abstract:
Elasticity of compute and storage is crucial for analytical cloud database systems. All cloud vendors provide disaggregated object stores, which can be used as storage backend for analytical query engines. Until recently, local storage was unavoidable to process large tables efficiently due to the bandwidth limitations of the network infrastructure in public clouds. However, the gap between remote network and local NVMe bandwidth is closing, making cloud storage more attractive. This paper presents a blueprint for performing efficient analytics directly on cloud object stores. We derive cost- and performance-optimal retrieval configurations for cloud object stores with the first in-depth study of this foundational service in the context of analytical query processing. For achieving high retrieval performance, we present AnyBlob , a novel download manager for query engines that optimizes throughput while minimizing CPU usage. We discuss the integration of high-performance data retrieval in query engines and demonstrate it by incorporating AnyBlob in our database system Umbra. Our experiments show that even without caching, Umbra with integrated AnyBlob achieves similar performance to state-of-the-art cloud data warehouses that cache data on local SSDs while improving resource elasticity.
38

Huang, Ting. "Design of Mobile Electronic Micro-Payment System." Applied Mechanics and Materials 530-531 (February 2014): 859–64. http://dx.doi.org/10.4028/www.scientific.net/amm.530-531.859.

Abstract:
A mobile electronic micro-payment system uses mobile terminals for electronic payments; the e-cash can circulate among multiple banks and is not limited to the bank that issued it. This paper develops the protocols of such a system based on elliptic curve cryptography (ECC): the account-opening protocol, the withdrawal protocol, the payment protocol, the deposit protocol, and the e-cash update protocol. The design is well suited to mobile micro-payment terminals with limited computing capacity, storage, network bandwidth, and power supply, and satisfies the needs of day-to-day transactions.
39

Prajapati, Urmila, and Pallavi Pahadiya. "Study and Implementation of Image Compression using Different Wavelets." Journal of Controller and Converters 3, no. 3 (2018): 17–22. https://doi.org/10.5281/zenodo.1565376.

Abstract:
Transmission applications are used in broadcasting, remote sensing via satellite, aircraft, radar, video conferencing, computer communications, facsimile transmission, and so on. The cost of storing and transmitting images across a network matters today, as huge collections of images are available in databases on the net. Reducing file size is necessary to meet the bandwidth requirements of many transmission systems and to reduce storage costs.
40

Munsayac, Francisco Emmanuel Jr. III, Nilo Bugtai, Renann Baldovino, Noelle Marie Espiritu, and Lowell Nathaniel Singson. "Development of an Arduino-based Control and Sensor System for a Robotic Laparoscopic Surgical Unit." Recoletos Multidisciplinary Research Journal 12, no. 1 (2024): 125–44. http://dx.doi.org/10.32871/rmrj2412.01.10.

Abstract:
This paper tests the control system of the LAPARA System, a Philippine-made robotic surgical system. Three types of tests were performed: PID optimization, position checking, and data transfer rate and memory bandwidth testing. The PID test yielded gains of 2.32 for P, 0.4 for I, and 1.5 for D, chosen to ensure the system runs smoothly. The system also performed properly during position checking, though movement in pitch and yaw required refinement due to the constraints. The data transfer rate of the PC-to-Arduino Due connection was 128 kb/s, slower than the 480 Mbps rating, while memory bandwidth testing showed storage for 23,040 32-bit values. In conclusion, although minor adjustments were needed to refine it, the LAPARA system performed as intended.
41

Zhang, Qing Qing, Qing Yu Wang, Gao Feng Zhang, Yi Zhang, and Lan Xu Wu. "A Power Quality Monitoring Data Management Scheme Based on Distributed Database." Advanced Materials Research 732-733 (August 2013): 1410–14. http://dx.doi.org/10.4028/www.scientific.net/amr.732-733.1410.

Full text
Abstract:
Currently, massive power quality monitoring data are stored in the centralized database of the monitoring master station, which causes problems such as large storage-space demands, slow query retrieval, low reliability, and poor scalability. This paper proposes a management scheme for massive power quality monitoring data based on a distributed database system. The monitoring data for different power quality indexes are stored on the distributed servers of the existing monitoring sub-stations, while the master station's server stores data characteristic values and data indexes and provides unified management of the distributed database system. The scheme takes full advantage of each server's storage space and network bandwidth, saving storage space and improving access efficiency.
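As a toy illustration of the master/sub-station split, the sketch below routes a query through a master-held index to the sub-station that owns the data; all names and values are assumptions, and the remote fetch is stubbed with a local dictionary.

# Master station keeps only the index: which sub-station holds each measure.
SUBSTATION_INDEX = {
    "voltage_sag": "substation-a",
    "harmonics": "substation-b",
    "flicker": "substation-c",
}

# Stand-in for data living on the sub-stations' own servers.
SUBSTATION_DATA = {
    "substation-a": {"voltage_sag": [0.92, 0.88, 0.95]},
    "substation-b": {"harmonics": [3.1, 2.8, 3.4]},
    "substation-c": {"flicker": [0.4, 0.5]},
}

def query(index_name):
    """Route a query via the master's index to the owning server."""
    server = SUBSTATION_INDEX[index_name]       # cheap master lookup
    return SUBSTATION_DATA[server][index_name]  # remote fetch in reality

print(query("harmonics"))  # -> [3.1, 2.8, 3.4]

Because the master holds only small index records, queries fan out to the sub-stations, spreading both storage and bandwidth load across servers.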
APA, Harvard, Vancouver, ISO, and other styles
42

Darwanto, Agus, and Mohammad Alfin Khoiri. "Implementasi Samba Primary Domain Controller, Manajemen Bandwidth, dan Pembatasan Akses Website untuk Meningkatkan Efektifitas Kegiatan Pembelajaran di Laboratorium Teknik Komputer & Jaringan SMKN 1 Dlanggu" [Implementation of a Samba Primary Domain Controller, Bandwidth Management, and Website Access Restriction to Improve the Effectiveness of Learning Activities in the Computer and Network Engineering Laboratory of SMKN 1 Dlanggu]. KONVERGENSI 17, no. 2 (2022): 89–101. http://dx.doi.org/10.30996/konv.v17i2.5478.

Full text
Abstract:
Without shared storage centralized on a server, students must store their files on external media such as flash drives. A Samba Primary Domain Controller provides centralized personal storage for each client, with a per-user login system backed by the server's resources; users can store assignments and project files without fear of them being swapped with other users' files. Likewise, without restrictions on websites during productive learning hours, student focus can suffer, so bandwidth management is used to distribute internet connection speed evenly across clients. Using the layer 7 protocol method and Address Lists to restrict access to certain websites, Ubuntu Linux as the centralized server, and hotspot management for wireless student access with per-user authentication in place of a conventional shared password, the result is a domain controller server that is local and cannot be reached from outside the network. Student internet bandwidth is capped at 1024 kbps, with average speeds of 700–1024 kbps. Websites such as YouTube, Facebook, and Instagram are blocked by the layer 7 protocol method, and the IP addresses of online games are recorded automatically and dynamically with the Address List method, so that the listed games cannot be accessed over either wired or wireless connections.
Keywords: samba, domain controller, mikrotik, bandwidth management
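A common mechanism behind a per-client cap like the 1024 kbps limit above is the token bucket. The Python sketch below illustrates the mechanism only; the actual deployment uses MikroTik queues, and the burst size here is an assumption.

import time

class TokenBucket:
    def __init__(self, rate_kbps, burst_kb):
        self.rate = rate_kbps * 1000 / 8      # refill rate in bytes per second
        self.capacity = burst_kb * 1000       # maximum burst in bytes
        self.tokens = self.capacity
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True                        # forward the packet
        return False                           # queue or drop it

client = TokenBucket(rate_kbps=1024, burst_kb=128)
print(client.allow(1500))  # a typical Ethernet frame fits the budget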
APA, Harvard, Vancouver, ISO, and other styles
43

Kakade, Manoj Subhash, Anupama Karuppiah, Mayank Mathur, et al. "Multitask Scheduling on Distributed Cloudlet System Built Using SoCs." Journal of Systemics, Cybernetics and Informatics 21, no. 1 (2023): 61–72. http://dx.doi.org/10.54808/jsci.21.01.61.

Full text
Abstract:
With the emergence of IoT, new computing paradigms have also emerged. Early IoT systems did all their computing on the cloud. With the rise of Industry 4.0, in which IoT is the major building block, clouds are no longer the only solution for data storage and analytics: cloudlet, fog computing, edge computing, and dew computing models now provide capabilities similar to the cloud. The term cloudlet was introduced in 2011, but research in this area has picked up only over the past five years. Unlike clouds, which are built with powerful server-class machines and GPUs, cloudlets are usually built from simpler devices such as SoCs. In this paper, we propose a novel, complete distributed architecture for cloudlets, together with algorithms for data storage and task allocation across the nodes of the cloudlet. The cloudlet system was built using the Qualcomm Snapdragon 410c. We analyze the architecture and the algorithms under varying workloads, bandwidth, and data storage. The primary aim of the algorithms and the architecture is to ensure uniform processing and data loads across the nodes of the system.
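One simple way to pursue the uniform-load goal described above is greedy least-loaded placement. The sketch below is an illustration under invented task costs and node names, not the paper's algorithm.

import heapq

def allocate(tasks, nodes):
    """Greedily assign task costs to nodes, keeping loads uniform
    (longest-processing-time-first onto the least-loaded node)."""
    heap = [(0.0, node) for node in nodes]   # (current load, node id)
    heapq.heapify(heap)
    placement = {}
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)     # least-loaded node first
        placement[task] = node
        heapq.heappush(heap, (load + cost, node))
    return placement

tasks = {"sense": 2.0, "filter": 1.5, "infer": 4.0, "log": 0.5}
print(allocate(tasks, ["sd410c-1", "sd410c-2", "sd410c-3"]))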
APA, Harvard, Vancouver, ISO, and other styles
44

Salamat, Sahand, Hui Zhang, Yang Seok Ki, and Tajana Rosing. "NASCENT2: Generic Near-Storage Sort Accelerator for Data Analytics on SmartSSD." ACM Transactions on Reconfigurable Technology and Systems 15, no. 2 (2022): 1–29. http://dx.doi.org/10.1145/3472769.

Full text
Abstract:
As the size of data generated every day grows dramatically, the computational bottleneck of computer systems has shifted toward storage devices. The interface between the storage and the computational platforms has become the main limitation due to its limited bandwidth, which does not scale when the number of storage devices increases. Interconnect networks do not provide simultaneous access to all storage devices and thus limit the performance of the system when executing independent operations on different storage devices. Offloading the computations to the storage devices eliminates the burden of data transfer from the interconnects. Near-storage computing offloads a portion of computations to the storage devices to accelerate big data applications. In this article, we propose a generic near-storage sort accelerator for data analytics, NASCENT2, which utilizes Samsung SmartSSD, an NVMe flash drive with an on-board FPGA chip that processes data in situ. NASCENT2 consists of dictionary decoder, sort, and shuffle FPGA-based accelerators to support sorting database tables based on a key column with any arbitrary data type. It exploits data partitioning applied by data processing management systems, such as SparkSQL, to break down the sort operation on colossal tables into multiple sort operations on smaller tables. NASCENT2 generic sort provides 2× speedup and 15.2× energy-efficiency improvement as compared to the CPU baseline. It moreover considers the specifications of the SmartSSD (e.g., the FPGA resources, interconnect network, and solid-state drive bandwidth) to increase the scalability of computer systems as the number of storage devices increases. With 12 SmartSSDs, NASCENT2 is 9.9× (137.2×) faster and 7.3× (119.2×) more energy efficient in sorting the largest tables of TPCC and TPCH benchmarks than the FPGA (CPU) baseline.
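The partition-then-merge pattern the abstract leans on can be shown in a few lines: sort each small partition independently (the step NASCENT2 offloads to SmartSSD FPGAs), then k-way merge the sorted runs on the host. The data and key layout below are illustrative assumptions.

import heapq

partitions = [
    [(3, "c"), (1, "a"), (2, "b")],
    [(6, "f"), (4, "d"), (5, "e")],
    [(9, "i"), (7, "g"), (8, "h")],
]

# Each partition is sorted by its key column; on SmartSSD this runs in situ,
# so the rows never cross the host interconnect unsorted.
runs = [sorted(p, key=lambda row: row[0]) for p in partitions]

# The host performs a streaming k-way merge of the sorted runs.
merged = list(heapq.merge(*runs, key=lambda row: row[0]))
print(merged[:4])  # [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]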
APA, Harvard, Vancouver, ISO, and other styles
45

Varsha, A. R., and P. H. Manohara. "Dependability Validating and Renewing of Code Using Inspectioning System." Global Journal of Engineering Science and Research Management 3, no. 6 (2016): 37–41. https://doi.org/10.5281/zenodo.55309.

Full text
Abstract:
Maintaining data dependability is the major objective in cloud storage; it includes inspection by a third-party auditor (TPA) for unauthorized access. Securing outsourced data in cloud storage against problems such as corruption while preserving data dependability is difficult, and fault tolerance is an important part of protecting data in the cloud. Renewing (regenerating) codes have recently gained importance because of their lower repair bandwidth. Existing remote-checking methods for renewing-coded data provide only private inspection, which requires data holders to stay online to handle both inspection and repair, which is not always practical. This paper proposes a public inspection scheme for renewing-code-based cloud storage. To solve the problem of regenerating failed authenticators in the absence of data holders, a proxy, privileged to regenerate the authenticators, is introduced into the traditional public inspection system model. A novel publicly verifiable authenticator, constructed from a couple of keys, is also designed. The scheme can thus largely release data holders from the online burden. The encoding coefficients are randomized with a pseudorandom function to preserve data privacy. Extensive security analysis shows the system is provably secure under the random oracle model, and experimental evaluation indicates that it is highly efficient and can feasibly be integrated into renewing-code-based cloud storage.
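The interaction pattern of remote integrity inspection can be sketched as a toy challenge-response spot check. Real renewing-code auditing relies on homomorphic authenticators and proxy regeneration, which this HMAC-based toy deliberately does not capture; it only shows the auditor challenging random blocks and verifying keyed tags.

import hashlib, hmac, secrets

KEY = secrets.token_bytes(32)
blocks = [f"block-{i}".encode() for i in range(100)]

# Tags computed at upload time (by the data holder or its proxy).
tags = [hmac.new(KEY, b, hashlib.sha256).digest() for b in blocks]

def audit(challenge):
    """Auditor spot-checks the challenged block indices."""
    for i in challenge:
        proof = hmac.new(KEY, blocks[i], hashlib.sha256).digest()
        if not hmac.compare_digest(proof, tags[i]):
            return False
    return True

challenge = [secrets.randbelow(len(blocks)) for _ in range(10)]
print("audit passed:", audit(challenge))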
APA, Harvard, Vancouver, ISO, and other styles
46

Suthir, S., and S. Janakiraman. "A Survey of Fast File Sharing System in Network." International Journal of Engineering Development and Research 5, no. 2 (2017): 1298–1304. https://doi.org/10.5281/zenodo.583721.

Full text
Abstract:
This survey of file sharing in networks focuses on characteristics such as speed and security. The protection approach considered not only shrinks the volume of network traffic that needs to be traced but also seeks to improve the sustainability of the structure. File deduplication is a technique for eliminating duplicate copies of files, and it has been widely used in storage systems to reduce storage space and upload bandwidth. With deduplication, only a single copy of each file is stored in the archive, even when it is used by a large number of clients; as a result, deduplication improves storage utilization while reducing reliability. Security analysis shows that the deduplication method is secure with respect to the definitions specified in the proposed security model. For future work, combining peer-to-peer file sharing with speed, security, and deduplication together can enhance the approaches and procedures reported in the literature.
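The core deduplication mechanism referred to above is content-addressed storage: files are stored under the hash of their content, so identical uploads consume space only once. A minimal sketch, with an in-memory dict standing in for the storage backend:

import hashlib

store = {}          # content hash -> file bytes
catalog = {}        # file name -> content hash

def upload(name, data):
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:
        store[digest] = data        # first copy: pay the storage cost
    catalog[name] = digest          # later copies: just a reference

upload("report-alice.pdf", b"%PDF- identical bytes")
upload("report-bob.pdf", b"%PDF- identical bytes")
print(len(store), "stored blob(s) for", len(catalog), "files")  # 1 for 2

This also makes the reliability trade-off visible: the single stored blob serves every owner, so losing it affects all of them.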
APA, Harvard, Vancouver, ISO, and other styles
47

Siew, Chengxi, and Pankaj Kumar. "CitySAC: A Query-Able CityGML Compression System." Smart Cities 2, no. 1 (2019): 106–17. http://dx.doi.org/10.3390/smartcities2010008.

Full text
Abstract:
Spatial Data Infrastructures (SDIs) are frequently used to exchange 2D and 3D data in areas such as city planning, disaster management, urban navigation, and many more. City Geography Markup Language (CityGML), an Open Geospatial Consortium (OGC) standard, has been developed for the storage and exchange of 3D city models. Because it is encoded in an XML-based format, data transfer efficiency is reduced, which leads to data storage issues. The use of CityGML for analysis purposes is limited by its inefficiency in terms of file size and bandwidth consumption. This paper introduces an XML-based compression technique and elaborates how data efficiency can be achieved with a schema-aware encoder. We present the CityGML Schema Aware Compressor (CitySAC), a compression approach for CityGML data transactions within the SDI framework. Our test results show that the encoder produces smaller files than existing state-of-the-art compression methods, significantly reducing file size to 7–10% of the original data.
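To give a flavor of the schema-aware idea: because the schema fixes the vocabulary of element names, verbose tags can be mapped to short codes before general compression. The tag list and sample document below are invented, and this is not CitySAC's encoder.

import zlib

SCHEMA_TAGS = ["bldg:Building", "bldg:measuredHeight", "gml:posList"]
CODES = {tag: f"\x01{i}\x02" for i, tag in enumerate(SCHEMA_TAGS)}

doc = (
    "<bldg:Building><bldg:measuredHeight>12.5</bldg:measuredHeight>"
    "<gml:posList>1.0 2.0 3.0</gml:posList></bldg:Building>"
)

encoded = doc
for tag, code in CODES.items():
    encoded = encoded.replace(tag, code)   # schema-aware substitution

raw = zlib.compress(doc.encode())
packed = zlib.compress(encoded.encode())
print(f"plain: {len(raw)} bytes, schema-aware: {len(packed)} bytes")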
APA, Harvard, Vancouver, ISO, and other styles
48

Srinivasan, S., et al. "Effective Cost Reduction Usage of Infrastructure in Small Scale Sectors using Cloud Storage and Internet of Things." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 2441–47. http://dx.doi.org/10.17762/ijritcc.v11i9.9311.

Full text
Abstract:
Presently, many small-scale information-technology-based private sectors struggle because of the high cost of maintaining infrastructure and resources. To avoid spending more money on maintenance, an effective cost-reduction method for infrastructure usage in computer-based private sectors is needed; here this is achieved by integrating different online and offline cloud storages with internet of things sensors. The method delivers a wide range of infrastructure, resources, and services through online and offline platforms on demand. The system disables or switches off unnecessary services or resources when they are unused, controlled through various types of sensors. The integrated cloud storage and IoT method performs user verification and validation and stores all information about infrastructure status in offline and online cloud storage environments. The paper applies dynamic group audit regulation through a k-means clustering technique with a cryptographic algorithm, and an alert-messaging and alarm-indication system is enforced. Combining internet of things and cloud storage techniques guarantees reduced infrastructure-usage costs while assuring privacy and security with quality of service and reliable information transmission. The method is evaluated in terms of bandwidth usage and computational and communication cost.
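As a hedged illustration of the k-means grouping step mentioned above, the sketch below clusters invented resource-usage records into audit groups with scikit-learn; the feature layout (CPU %, bandwidth MB/h) and the number of clusters are assumptions.

import numpy as np
from sklearn.cluster import KMeans

usage = np.array([
    [5, 10], [7, 12], [6, 9],        # mostly idle machines
    [80, 300], [85, 320], [78, 290], # heavily used machines
], dtype=float)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(usage)
for machine, group in enumerate(km.labels_):
    print(f"machine-{machine}: audit group {group}")
# Machines in the idle group are candidates for switching off to save cost.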
APA, Harvard, Vancouver, ISO, and other styles
49

Wicaksono, Airlangga Baihaqi, Rendy Munadi, and Sussi Sussi. "Cloud server design for heavy workload gaming computing with Google cloud platform." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 2 (2023): 2197. http://dx.doi.org/10.11591/ijece.v13i2.pp2197-2205.

Full text
Abstract:
Cloud servers are generally used for data storage and remote office activities, but they can also be applied to gaming: a cloud server can be paired with virtual machines and a gaming platform that users access via an internet connection. The user's device then no longer needs to process the workload, because it is carried out by virtual machines on the cloud server. The authors design a cloud gaming system using Google Cloud Platform as the cloud server and Parsec, attached to a virtual machine, as the optimizer for game computing. Measurements were taken with two test games ranging from low to middle specifications. On the user side, central processing unit (CPU) and random-access memory (RAM) usage stayed below 40% when running game 1 and below 44% when running game 2, while on the system side CPU and RAM usage exceeded 40% and the graphics processing unit (GPU) peaked at 99%. Quality of service testing was carried out at bandwidths of 5, 10, and 30 Mbps, with a minimum bandwidth of 10 Mbps. In general, only small differences were observed between the test games and the different bandwidths.
APA, Harvard, Vancouver, ISO, and other styles