
Journal articles on the topic 'File system performance'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'File system performance.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and a bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Needhi, Jeyadev, Ram Prasath G, Vishnu G, and Deepesh Vikram KK. "Performance Optimization of Voice-Assisted File Management Systems." International Journal of Engineering and Computer Science 13, no. 07 (2024): 26250–56. http://dx.doi.org/10.18535/ijecs/v13i07.4854.

Abstract:
In this paper, we present a novel approach for managing the file system in Linux using a voice assistant. Our system allows users to perform file system operations such as creating directories, renaming files, and deleting files by issuing voice commands. We develop a voice assistant using Python libraries and integrate it with the file system in Linux. The voice assistant is capable of understanding natural language and executing commands based on the user’s voice inputs. We conduct experiments to evaluate the performance of the system and demonstrate that our approach is effective and effici
2

Wang, Ya Rong, Pei Rong Wang, and Rui Liu. "Hybrid File System - A Strategy for the Optimization of File System." Advanced Materials Research 734-737 (August 2013): 3129–32. http://dx.doi.org/10.4028/www.scientific.net/amr.734-737.3129.

Abstract:
The hybrid file system is designed to reduce the response latency of file system I/Os and to extend the capacity of the local file system to the cloud by taking advantage of the Internet. Our hybrid file system consists of an SSD, an HDD, and the Amazon S3 cloud file system. We store small files, the directory tree, and the metadata of all files on the SSD, because the SSD responds well to small, random I/Os. The HDD is good at serving large, sequential I/Os, so we use it like a warehouse to store big files, which are linked by symbolic files on the SSD. We also extend the loc
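The size-based SSD/HDD split this abstract describes can be sketched as a simple routing policy. The threshold, mount points, and function below are hypothetical illustrations, not taken from the paper:

```python
import os

# Hypothetical threshold and mount points -- not from the paper.
SMALL_FILE_LIMIT = 64 * 1024   # files up to 64 KiB go to the SSD tier
SSD_ROOT = "/mnt/ssd"
HDD_ROOT = "/mnt/hdd"

def place_file(rel_path: str, size: int) -> str:
    """Return the tier path for a file: small files (and metadata) on the SSD,
    large files on the HDD, mirroring the size-based split in the abstract."""
    root = SSD_ROOT if size <= SMALL_FILE_LIMIT else HDD_ROOT
    return os.path.join(root, rel_path)

print(place_file("etc/config.yaml", 4 * 1024))             # small -> SSD tier
print(place_file("video/lecture.mp4", 700 * 1024 * 1024))  # large -> HDD tier
```

A real system would also keep a symbolic link on the SSD pointing at each HDD-resident file, as the abstract notes.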
3

Wu, Zhi Hao. "A Log-Structured File System Based on LevelDB." Applied Mechanics and Materials 602-605 (August 2014): 3481–84. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3481.

Abstract:
Traditional file systems have some shortcomings in storing small files, such as random data layout, wasted disk space, and a lack of inode resources. In this thesis, a log-structured file system named LevelFS, based on LevelDB, is presented. By using a write buffer, it turns random small-file disk writes into sequential disk writes and reduces the distance between related data, so as to improve the read and write performance of the file system. Experiments show that LevelFS can greatly improve the read and write performance of small files without affecting the large ones.
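The core log-structuring idea, turning random small writes into sequential appends tracked by an in-memory index, can be sketched roughly as follows. This is a toy illustration of the general technique, not LevelFS itself:

```python
import io

class TinyLog:
    """Minimal log-structured store: every write is appended sequentially to
    one log, and an in-memory index maps each key to (offset, length)."""
    def __init__(self):
        self.log = io.BytesIO()   # stands in for the on-disk log file
        self.index = {}

    def put(self, key: str, data: bytes) -> None:
        offset = self.log.seek(0, io.SEEK_END)   # always append: sequential I/O
        self.log.write(data)
        self.index[key] = (offset, len(data))

    def get(self, key: str) -> bytes:
        offset, length = self.index[key]
        self.log.seek(offset)
        return self.log.read(length)

log = TinyLog()
log.put("a.txt", b"hello")
log.put("b.txt", b"world")
print(log.get("a.txt"))
```

Related small files written close in time end up adjacent in the log, which is the locality effect the abstract credits for the read-performance gain.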
4

Abhilisha Pandurang Bhalke. "Enhancement the Efficiency of Work with P2P System." International Journal for Modern Trends in Science and Technology 06, no. 09 (2020): 68–72. http://dx.doi.org/10.46501/ijmtst060911.

Abstract:
A P2P system should use proximity information to minimize the load of file requests and improve efficiency. Clustering peers by their physical proximity can also raise file-request performance. However, very little current work clusters peers by both their demands and their physical proximity. Although structured P2P provides more efficient file requests than unstructured P2P, it is difficult to apply because of its strictly defined topology. In this work, we introduce a system for P2P file exchange based on proximity and level of interest, built on str
5

He, Qin Lu, Zhan Huai Li, Le Xiao Wang, Hui Feng Wang, and Jian Sun. "Performance Measurement Technique of Cloud Storage System." Advanced Materials Research 760-762 (September 2013): 1197–201. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1197.

Abstract:
This paper researches techniques for testing the aggregate bandwidth of file systems in cloud storage systems. Through a theoretical analysis of memory file systems, network file systems, and parallel file systems, and based on the concept of aggregate bandwidth in cloud storage systems, we developed FSPoly, a software tool for testing file system aggregate bandwidth in cloud storage environments. In this paper, FSPoly is used to test the Lustre file system, reasonable test methods are identified, and the file system performance of the latest cloud storage systems is then evaluated using FSPoly.
6

Prajwal Said, Ketaki Naik, Nupur Agrawal, Srushti Bhoite, and Sayali Shelar. "Apriori-Based Prefetching Files for Caching." International Research Journal on Advanced Engineering and Management (IRJAEM) 6, no. 07 (2024): 2348–53. http://dx.doi.org/10.47392/irjaem.2024.0339.

Abstract:
The project proposes an innovative solution aimed at optimizing file system performance through predictive caching techniques integrated with a Graphical User Interface (GUI). The GUI facilitates user interaction by offering functionalities such as browsing files and displaying performance metrics via graphical representations of bandwidth and Input/Output Operations Per Second (IOPS). The functionality revolves around dynamically determining file placement on Solid State Drives (SSDs) or Hard Disk Drives (HDDs). The system employs predictive caching to identify frequently accessed files, ensu
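An Apriori-style first pass over access sessions, mining frequently co-accessed file pairs as prefetch hints, might look like the sketch below. The sessions, threshold, and file names are invented for illustration; the paper's actual pipeline is richer:

```python
from collections import Counter
from itertools import combinations

# Toy access sessions (hypothetical traces, not from the paper).
sessions = [
    {"app.cfg", "theme.css", "index.html"},
    {"app.cfg", "theme.css"},
    {"app.cfg", "theme.css", "logo.png"},
    {"index.html", "logo.png"},
]

MIN_SUPPORT = 3  # a pair must co-occur in at least 3 sessions

# First Apriori pass: count candidate pairs in the session "transactions".
pair_counts = Counter()
for s in sessions:
    for pair in combinations(sorted(s), 2):
        pair_counts[pair] += 1
frequent_pairs = {p for p, c in pair_counts.items() if c >= MIN_SUPPORT}

def prefetch_candidates(just_opened: str) -> set:
    """Files worth pulling into the cache when `just_opened` is accessed."""
    return {b if a == just_opened else a
            for a, b in frequent_pairs if just_opened in (a, b)}

print(prefetch_candidates("app.cfg"))
```

Here only ("app.cfg", "theme.css") reaches the support threshold, so opening either file would trigger a prefetch of the other.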
7

Shi, Ruizhe, Ruizhi Cheng, Bo Han, Yue Cheng, and Songqing Chen. "A Closer Look into IPFS: Accessibility, Content, and Performance." Proceedings of the ACM on Measurement and Analysis of Computing Systems 8, no. 2 (2024): 1–31. http://dx.doi.org/10.1145/3656015.

Abstract:
The InterPlanetary File System (IPFS) has recently gained considerable attention. While prior research has focused on understanding its performance characterization and application support, it remains unclear: (1) what kind of files/content are stored in IPFS, (2) who are providing these files, (3) are these files always accessible, and (4) what affects the file access performance. To answer these questions, in this paper, we perform measurement and analysis on over 4 million files associated with CIDs (content IDs) that appeared in publicly available IPFS datasets. Our results reveal the foll
8

Shi, Ruizhe, Ruizhi Cheng, Bo Han, Yue Cheng, and Songqing Chen. "A Closer Look into IPFS: Accessibility, Content, and Performance." ACM SIGMETRICS Performance Evaluation Review 52, no. 1 (2024): 77–78. http://dx.doi.org/10.1145/3673660.3655040.

Abstract:
The InterPlanetary File System (IPFS) has recently gained considerable attention. While prior research has focused on understanding its performance characterization and application support, it remains unclear: (1) what kind of files/content are stored in IPFS, (2) who are providing these files, (3) are these files always accessible, and (4) what affects the file access performance. To answer these questions, in this paper, we perform measurement and analysis on over 4 million files associated with CIDs (content IDs) that appeared in publicly available IPFS datasets. Our results reveal the foll
9

Dabre, Sandip, Sachin Vyawahare, and Pallavi P. Rane. "Designing an Effortless Local and Cloud File Management for Synchronization of File System with Conflict Resolution." International Journal of Ingenious Research, Invention and Development (IJIRID) 3, no. 3 (2024): 218–27. https://doi.org/10.5281/zenodo.11237145.

Abstract:
File synchronization is the act of guaranteeing that two or more sites have identical and current files. Efficient file management between local and cloud storage systems is crucial. Nevertheless, the process of file synchronization can be arduous as a result of many circumstances, including network latency, bandwidth restrictions, file conflicts, and user preferences. This work presents a complete file synchronization method capable of managing local and cloud files effortlessly, accommodating various circumstances and requirements, and resolving conflicts. The algorithm comprises four
10

Bartus, Paul. "Using Hadoop Distributed and Deduplicated File System (HD2FS) in Astronomy." Proceedings of the International Astronomical Union 15, S367 (2019): 464–66. http://dx.doi.org/10.1017/s1743921321000387.

Abstract:
During the last years, the amount of data has skyrocketed. As a consequence, data has become more expensive to store than to generate. The storage needs for astronomical data are also following this trend. Storage systems in Astronomy contain redundant copies of data, such as identical files or identical sub-file regions. We propose the use of the Hadoop Distributed and Deduplicated File System (HD2FS) in Astronomy. HD2FS is a deduplication storage system that was created to improve data storage capacity and efficiency in distributed file systems without compromising Input/Output perfo
11

Khashan, Osama A., Nour M. Khafajah, Waleed Alomoush, et al. "Dynamic Multimedia Encryption Using a Parallel File System Based on Multi-Core Processors." Cryptography 7, no. 1 (2023): 12. http://dx.doi.org/10.3390/cryptography7010012.

Abstract:
Securing multimedia data on disk drives is a major concern because of their rapidly increasing volumes over time, as well as the prevalence of security and privacy problems. Existing cryptographic schemes have high computational costs and slow response speeds. They also suffer from limited flexibility and usability from the user side, owing to continuous routine interactions. Dynamic encryption file systems can mitigate the negative effects of conventional encryption applications by automatically handling all encryption operations with minimal user input and a higher security level. However, m
12

Sterniczuk, Bartosz. "Comparison of EXT4 and NTFS filesystem performance." Journal of Computer Sciences Institute 25 (December 30, 2022): 297–300. http://dx.doi.org/10.35784/jcsi.3004.

Abstract:
The aim of this article is to compare the two most popular and competing file systems, EXT4 and NTFS, on Ubuntu using an SSD. At the beginning of the article, a critical review of the literature is given, explaining the purpose of the research undertaken. Additionally, the basics of the operation of both file systems are explained. The research consisted of copying files between two partitions and measuring the duration of this operation using a specially developed shell script written in bash. The conducted research has shown that the EXT4 system is
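The article's measurement tool is a bash timing script; a rough Python analogue of the copy-and-time methodology could look like this. The file counts and sizes are assumptions, and copying within temporary directories stands in for copying between the two partitions:

```python
import os, shutil, tempfile, time

def time_copy(num_files: int = 50, size: int = 256 * 1024) -> float:
    """Create num_files files of `size` bytes, copy them to another
    directory, and return the elapsed copy time in seconds."""
    with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
        for i in range(num_files):
            with open(os.path.join(src, f"f{i}.bin"), "wb") as f:
                f.write(os.urandom(size))
        start = time.perf_counter()
        for name in os.listdir(src):
            shutil.copy(os.path.join(src, name), os.path.join(dst, name))
        return time.perf_counter() - start

print(f"copy took {time_copy():.3f} s")
```

To actually compare file systems, the source and destination would need to live on EXT4- and NTFS-formatted partitions, and caches should be dropped between runs.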
13

Matsuda, Yusuke, Masahiro Sasabe, and Tetsuya Takine. "Evolutionary Game Theory-Based Evaluation of P2P File-Sharing Systems in Heterogeneous Environments." International Journal of Digital Multimedia Broadcasting 2010 (2010): 1–12. http://dx.doi.org/10.1155/2010/369814.

Abstract:
Peer-to-Peer (P2P) file sharing is one of the key technologies for achieving attractive P2P multimedia social networking. In P2P file-sharing systems, file availability is improved by cooperative users who cache and share files. Note that file caching carries costs such as storage consumption and processing load. In addition, users have different degrees of cooperativeness in file caching, and they are in different surrounding environments arising from the topological structure of P2P networks. Using evolutionary game theory, this paper evaluates the performance of P2P file-sharing systems in such het
14

Win, Myat Thu, Lai Win Tin, and Mu Tyar Su. "Performance Comparison of File Security System using TEA and Blowfish Algorithms." International Journal of Trend in Scientific Research and Development 3, no. 5 (2019): 871–77. https://doi.org/10.5281/zenodo.3589817.

Abstract:
With the progress in electronic data exchange, the need for information security has become a necessity. Due to the growth of multimedia applications, security has become an important issue in the communication and storage of different files. To make this a reality, cryptographic algorithms are widely used as essential tools. Cryptographic algorithms provide security services such as confidentiality, authentication, data integrity, and secrecy by encryption. Different cryptographic algorithms are commonly used for information security in many research areas. Although there are two encryption
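Of the two ciphers compared, TEA is compact enough to sketch in full. Below is a minimal implementation of the standard 32-round TEA block cipher on one 64-bit block; the key and plaintext words are arbitrary test values:

```python
DELTA = 0x9E3779B9   # TEA's key-schedule constant
MASK = 0xFFFFFFFF    # keep arithmetic in 32 bits

def tea_encrypt(v0, v1, key):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key
    given as four 32-bit words, using TEA's 32 rounds."""
    k0, k1, k2, k3 = key
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, key):
    """Reverse the 32 TEA rounds."""
    k0, k1, k2, k3 = key
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) & MASK) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
ct = tea_encrypt(0xDEADBEEF, 0xCAFEBABE, key)
assert tea_decrypt(*ct, key) == (0xDEADBEEF, 0xCAFEBABE)
```

A file-security comparison like the paper's would run such a block cipher in a suitable mode over whole files and time both ciphers on identical inputs.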
15

Mohammad, Bahjat Al-Masadeh, Sanusi Azmi Mohd, and Sakinah Syed Ahmad Sharifah. "Tiny datablock in saving Hadoop distributed file system wasted memory." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 2 (2023): 1757–72. https://doi.org/10.11591/ijece.v13i2.pp1757-1772.

Abstract:
The Hadoop distributed file system (HDFS) is the file system Hadoop uses to store all incoming data. Since its introduction, HDFS has consumed a huge amount of memory in order to serve a normal dataset. Moreover, the current file-saving mechanism in HDFS saves only one file per datablock. Thus, a file of just 5 MB in size will take up a whole datablock, leaving the rest of that memory unavailable for other incoming files, which is a considerable waste of memory when serving a normal-size dataset. This paper proposed a method called tiny datablockHDFS (TD
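The general idea of packing many small files into a shared datablock, instead of dedicating a whole block to each, can be sketched with first-fit packing. The block size matches the common HDFS default, but the packing policy itself is an assumed illustration, not TD-HDFS:

```python
BLOCK_SIZE = 128 * 1024 * 1024  # common default HDFS datablock size (128 MB)

def pack_files(files):
    """First-fit packing of files into shared datablocks. `files` is a list
    of (name, size) pairs; returns one list of file names per block used."""
    blocks = []  # each entry: [remaining_bytes, [names]]
    for name, size in files:
        for block in blocks:
            if block[0] >= size:          # fits in an existing block
                block[0] -= size
                block[1].append(name)
                break
        else:                              # open a new block
            blocks.append([BLOCK_SIZE - size, [name]])
    return [names for _, names in blocks]

MB = 1024 * 1024
files = [("a", 5 * MB), ("b", 60 * MB), ("c", 70 * MB), ("d", 100 * MB)]
print(pack_files(files))   # fewer blocks than one per file
```

With one-file-per-block semantics these four files would occupy four blocks; packing needs only three, which is the kind of saving the abstract targets.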
16

Gao, Wei Feng, Tie Zhu Zhao, and Ming Bin Lin. "An Adaptive Performance Prediction Method of Distributed File System Based on Performance Correlation." Advanced Materials Research 998-999 (July 2014): 1362–65. http://dx.doi.org/10.4028/www.scientific.net/amr.998-999.1362.

Abstract:
Distributed file systems are emerging as a key component of large scale cloud storage platform due to the continuous growth of the amount of application data. Performance modeling and analysis is an important concern in the distributed file system area. This paper focuses on the performance prediction and modeling issues. An adaptive prediction model (APModel) is proposed to predict the performance of distributed file systems by capturing the performance correlation of different performance factors. We perform a series of experiments to validate the proposed prediction model. The experiment re
17

Huo, Qiu Yan, and Yu Zhang. "Semi-Preemptible Range Lock in Parallel Network File System (pNFS)." Advanced Materials Research 546-547 (July 2012): 1250–55. http://dx.doi.org/10.4028/www.scientific.net/amr.546-547.1250.

Abstract:
Distributed file systems use a file lock mechanism to ensure consistency when shared data are accessed by multiple nodes. In this paper, exploiting the fact that in distributed systems the same file is accessed frequently, together with the high concurrency offered by range locks, a semi-preemptible range lock for pNFS is proposed. Clients locally cache the finer-grained locks for the ranges of files they hold, and retain or cache range locks even without the file instances. When an access lock is cached, a client can answer some requests without a server message, improving performance by exploi
18

Rawat, U. S., and Shishir Kumar. "ECFS." International Journal of Information Security and Privacy 6, no. 2 (2012): 53–63. http://dx.doi.org/10.4018/jisp.2012040104.

Abstract:
Proposed is a secure and efficient approach for designing and implementing an enterprise-class cryptographic file system for Linux (ECFS) in kernel space. It uses a stackable file system interface to introduce a layer that encrypts files using symmetric keys, and public-key cryptography for user authentication and file sharing, like other existing enterprise-class cryptographic file systems. It differs from existing systems by including all public-key cryptographic operations and public-key infrastructure (PKI) support in kernel space, which protects it from attacks that may take place wit
19

Yuwono, Doddy Teguh, Abdul Fadlil, and Sunardi Sunardi. "Performance Comparison of Forensic Software for Carving Files using NIST Method." Jurnal Teknologi dan Sistem Komputer 7, no. 3 (2019): 89–92. http://dx.doi.org/10.14710/jtsiskom.7.3.2019.89-92.

Abstract:
Data lost due to a fast format or system crash remains in the sectors of the storage media. Digital forensics needs evidence and techniques for retrieving data lost in storage. This research studied the performance of open-source forensic software for data retrieval, namely Scalpel, Foremost, and Autopsy, using the National Institute of Standards and Technology (NIST) forensic method. The testing process was carried out using the file carving technique. The carving results are analyzed based on the success rate (accuracy) of the forensic tools in returning the data. Scalpel perf
20

Alkenani, Jawad, and Khulood Ahmed Nassar. "Enhanced system for ns2 trace file analysis with network performance evaluation." Iraqi Journal of Intelligent Computing and Informatics (IJICI) 1, no. 2 (2022): 119–30. http://dx.doi.org/10.52940/ijici.v1i2.22.

Abstract:
One of the critical challenges facing operators around the world is how to ensure that everything is running smoothly as well as how to analyze the performance of the network. Nonetheless, the analytic system must be precise, user-friendly, and quick enough to depict network performance in real time. Network performance is essential for ensuring service quality. In this light, the Network Simulator NS-2 is generally employed for network research on the widely-used UNIX and Windows systems. Next, the network scenarios are generated using network simulation scripts, and upon completion of the si
21

Jayakumar, N., and A. M. Kulkarni. "A Simple Measuring Model for Evaluating the Performance of Small Block Size Accesses in Lustre File System." Engineering, Technology & Applied Science Research 7, no. 6 (2017): 2313–18. http://dx.doi.org/10.48084/etasr.1557.

Abstract:
Storage performance is one of the vital characteristics of a big data environment. Data throughput can be increased to some extent using storage virtualization and parallel data paths. Technology has enhanced the various SANs and storage topologies to be adaptable for diverse applications that improve end to end performance. In big data environments the mostly used file systems are HDFS (Hadoop Distributed File System) and Lustre. There are environments in which both HDFS and Lustre are connected, and the applications directly work on Lustre. In Lustre architecture with out-of-band storage vir
22

Jayakumar, N., and A. M. Kulkarni. "A Simple Measuring Model for Evaluating the Performance of Small Block Size Accesses in Lustre File System." Engineering, Technology & Applied Science Research 7, no. 6 (2017): 2313–18. https://doi.org/10.5281/zenodo.1118996.

Abstract:
Storage performance is one of the vital characteristics of a big data environment. Data throughput can be increased to some extent using storage virtualization and parallel data paths. Technology has enhanced the various SANs and storage topologies to be adaptable for diverse applications that improve end to end performance. In big data environments the mostly used file systems are HDFS (Hadoop Distributed File System) and Lustre. There are environments in which both HDFS and Lustre are connected, and the applications directly work on Lustre. In Lustre architecture with out-of-band storage vir
23

Cho, Kyungwoon, and Hyokyung Bahn. "A Lightweight File System Design for Unikernel." Applied Sciences 14, no. 8 (2024): 3342. http://dx.doi.org/10.3390/app14083342.

Abstract:
Unikernels are specialized operating system (OS) kernels optimized for a single application or service, offering advantages such as rapid boot times, high performance, minimal memory usage, and enhanced security compared to general-purpose OS kernels. Unikernel applications must remain compatible with the runtime environment of general-purpose kernels, either through binary or source compatibility. As a result, many Unikernel projects have prioritized system call compatibility over performance enhancements. In this paper, we explore the design principles of Unikernel file systems and introduce
24

Al-Saleh, Mohammed, and Hanan Hamdan. "Precise Performance Characterization of Antivirus on the File System Operations." JUCS - Journal of Universal Computer Science 25, no. 9 (2019): 1089–108. https://doi.org/10.3217/jucs-025-09-1089.

Abstract:
The Antivirus (AV) is of important concern to the end-user community. Mainly, the AV achieves security by scanning data against its database of virus signatures. In addition, the AV tries to reach a pleasant balance between security and usability. When to scan data is an important design decision an AV has to make. Because AVs are equipped with on-access scanners that scan files when necessary, we want a fine-grained approach that provides a high-precision explanation of the performance impact of AVs on different file system operations. Microsoft's min
25

Niu, De Jiao, Tao Cai, Yong Zhao Zhan, and Shi Guang Ju. "Metadata Indexing Sub-System for Distributed File System." Applied Mechanics and Materials 143-144 (December 2011): 864–68. http://dx.doi.org/10.4028/www.scientific.net/amm.143-144.864.

Abstract:
The efficiency of metadata indexing is important to the performance of a distributed file system, and the time and space overheads of current metadata management algorithms are unstable. In this paper, we use a B-tree to index the metadata of a distributed file system. Lustre is an open-source distributed file system in which a hash function is used to manage metadata. We implement a prototype of the metadata indexing sub-system on Lustre and use IOzone to test the I/O performance of Lustre with and without the metadata indexing sub-system. The simulation results show that Lustre with the metada
26

Riatma, Galih Putra, Bagas Satya Dian Nugraha, Anugrah Nur Rahmanto, and Fitri Fitri. "Performance Evaluation and the Impact of File Size on Various AES Encryption Modes." JURNAL JARTEL: Jurnal Jaringan Telekomunikasi 15, no. 2 (2025): 121–28. https://doi.org/10.33795/jartel.v15i2.7323.

Abstract:
Performance and information security are important factors in a system. A secure system does not necessarily perform fast, because encryption processing takes time. The system must therefore use an encryption mode that suits its needs. This study measures the encryption performance of five AES modes, namely AES-ECB, AES-SIV, AES-CBC, AES-EAX, and AES-GCM, on text data and images with sizes of 1 KB, 10 KB, 100 KB, and 1000 KB. Performance testing is carried out using the same hardware and software to ensure consistency. From the analysis results, it was found that
27

da Silva, Erico Correia, Liria Matsumoto Sato, and Edson Toshimi Midorikawa. "Distributed File System to Leverage Data Locality for Large-File Processing." Electronics 13, no. 1 (2023): 106. http://dx.doi.org/10.3390/electronics13010106.

Abstract:
Over the past decade, significant technological advancements have led to a substantial increase in data proliferation. Both scientific computation and Big Data workloads play a central role, manipulating massive data and challenging conventional high-performance computing architectures. Efficiently processing voluminous files using cost-effective hardware remains a persistent challenge, limiting access to new technologies for individuals and organizations capable of higher investments. In response to this challenge, AwareFS, a novel distributed file system, addresses the efficient reading and
28

Klauser, Artur, and Reinhard Posch. "Distributed Caching in Networked File Systems." JUCS - Journal of Universal Computer Science 1, no. 6 (1995): 399–409. https://doi.org/10.3217/jucs-001-06-0399.

Abstract:
Changing relative performance of processors, networks, and disks makes it necessary to reconsider algorithms using these three resources. As networks get faster and less congested topologies emerge, it becomes important to use network resources more aggressively to obtain good performance. Substitution of local disk accesses by accesses to remote memory can lead to better balanced resource usage and thus to faster systems. In this work we address the issue of file caching in a networked file system configuration. Distributed block-level in-memory caches are considered. We show that carefully c
29

Wu, Qi Meng, Ke Xie, Ming Fa Zhu, Li Min Xiao, and Li Ruan. "DMFSsim: A Distributed Metadata File System Simulator." Applied Mechanics and Materials 241-244 (December 2012): 1556–61. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.1556.

Abstract:
Parallel file systems deploy multiple metadata servers to distribute the heavy metadata workload from clients. With an increasing number of metadata servers, metadata-intensive operations face problems of collaboration among the servers, compromising the performance gain. Consequently, a file system simulator is very helpful for trying out optimization ideas to solve these problems. In this paper, we propose DMFSsim to simulate metadata-intensive operations on large-scale distributed metadata file systems. DMFSsim can flexibly replay traces of multiple metadata operations, supp
30

Zhang, Zheng Yuan, and Jing Yin Li. "ARM7-Based File System Design." Applied Mechanics and Materials 55-57 (May 2011): 233–38. http://dx.doi.org/10.4028/www.scientific.net/amm.55-57.233.

Abstract:
ARM is a 32-bit RISC microprocessor. Its high performance, low power consumption, and flexible extensibility make ARM particularly suitable for designing embedded systems. Therefore, realizing the FAT16 file system on ARM can satisfy the demand for file storage in embedded systems. This article describes a simple realization of the FAT16 file system on an ARM7.
31

Hwa Song, Jong, Se Ho Kim, Song Yi Hwang, Seung Gyu Kim, and Sung Jin Lee. "A study on the APFS timestamps in MACOS." International Journal of Engineering & Technology 7, no. 3.3 (2018): 133. http://dx.doi.org/10.14419/ijet.v7i2.33.13870.

Abstract:
Background/Objectives: There are not many timestamp analysis studies on High Sierra, the latest macOS (10.13), which changed the file system from HFS+ to APFS (Apple File System). Methods/Statistical analysis: In this experiment, we tried various actions on files and directories using the Sierra version on an internal drive and the High Sierra version on an external drive. The ‘mdls’ command and the time attributes of the Finder are used for comparing the metadata. The ‘log show’ command is also used for checking system time modification. For analyzing the .DS_Store and the db.sqlite
32

Pavlyk, S. V., O. L. Lashko, and D. O. Kushnir. "PRINCIPLES OF DESIGNING AND IMPLEMENTING SYSTEM OF AUTOMATED FILE DELETION AND CONTROL FOR WINDOWS OS." Computer systems and network 6, no. 2 (2024): 172. https://doi.org/10.23939/csn2024.02.172.

Abstract:
The article examines the file system at the kernel level of the operating system. It addresses the primary issues related to personal data loss and protection and the general challenges of filtering content stored on users' computers. The analysis reveals that increasing amounts of personal data are being lost or leaked from personal computers without users' knowledge. It also shows that many files stored on users' computers are potentially dangerous or unnecessary. The article emphasizes the development of an effective software solution to tackle the issue of filtering content on users' personal computer
33

Pavlyk, S. V., O. L. Lashko, and D. O. Kushnir. "PRINCIPLES OF DESIGNING AND IMPLEMENTING SYSTEM OF AUTOMATED FILE DELETION AND CONTROL FOR WINDOWS OS." Computer systems and network 6, no. 2 (2024): 171–78. https://doi.org/10.23939/csn2024.02.171.

Abstract:
The article examines the file system at the kernel level of the operating system. It addresses the primary issues related to personal data loss and protection and the general challenges of filtering content stored on users' computers. The analysis reveals that increasing amounts of personal data are being lost or leaked from personal computers without users' knowledge. It also shows that many files stored on users' computers are potentially dangerous or unnecessary. The article emphasizes the development of an effective software solution to tackle the issue of filtering content on users' personal computer
34

Woods, Kam, and Geoffrey Brown. "Migration Performance for Legacy Data Access." International Journal of Digital Curation 3, no. 2 (2008): 74–88. http://dx.doi.org/10.2218/ijdc.v3i2.59.

Abstract:
We present performance data relating to the use of migration in a system we are creating to provide web access to heterogeneous document collections in legacy formats. Our goal is to enable sustained access to collections such as these when faced with increasing obsolescence of the necessary supporting applications and operating systems. Our system allows searching and browsing of the original files within their original contexts utilizing binary images of the original media. The system uses static and dynamic file migration to enhance collection browsing, and emulation to support both the use
35

Revathi, S., M. Sathya, and G. Simi Margarat. "Reared Parsimonious File System by Cloud." International Journal of Engineering Sciences & Research Technology 7, no. 3 (2018): 827–31. https://doi.org/10.5281/zenodo.1207830.

Abstract:
With simple access interfaces and flexible billing models, cloud storage has become an attractive solution to simplify storage management for both business and individual users. However, older file systems, largely optimized for local disk-based storage backends, cannot fully exploit the internal features of the cloud to obtain good performance. In this paper, we present the design, implementation, and evaluation of Coral, a cloud-based file system that strikes a balance between performance and financial cost. Unlike previous studies that treat cloud storage as just a normal backend of exis
APA, Harvard, Vancouver, ISO, and other styles
36

Ramaswamy, Mithilesh. "Efficient File Security Scanning: Combining Hash Histories and Bloom Filters to Minimize Redundancy." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 12 (2024): 1–6. https://doi.org/10.55041/ijsrem12781.

Full text
Abstract:
As file repositories expand in size, traditional full-file security scanning becomes computationally expensive and redundant. This paper introduces a novel hybrid framework that leverages hash histories embedded in file metadata and Bloom filters for efficient security scanning. The approach ensures that only modified or newly added files are scanned, reducing overhead while maintaining robust security coverage. By augmenting file metadata with hash histories, the system provides decentralized tracking of file state changes. Bloom filters further optimize the process by efficiently determining
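The abstract above combines per-file hash histories with a Bloom filter so that only new or modified files are rescanned. A minimal Python sketch of that idea (the `SimpleBloom` class and the `needs_scan` policy are illustrative assumptions, not the paper's implementation):

```python
import hashlib

class SimpleBloom:
    """A tiny Bloom filter over a fixed-size bit array."""
    def __init__(self, size: int = 1 << 16, hashes: int = 3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item: bytes):
        # Derive several bit positions from independent salted SHA-256 hashes.
        for i in range(self.hashes):
            h = int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:8], "big")
            yield h % self.size

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def needs_scan(content: bytes, scanned: SimpleBloom) -> bool:
    """Scan only if this content hash has not been seen before."""
    digest = hashlib.sha256(content).digest()
    if scanned.might_contain(digest):   # probably already scanned (may be a false positive)
        return False
    scanned.add(digest)                 # record the hash in the scan history
    return True
```

A Bloom filter can return false positives but never false negatives, so in a real scanner a "probably scanned" hit would typically be confirmed against the file's stored hash history before the scan is actually skipped.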
APA, Harvard, Vancouver, ISO, and other styles
37

Jiang, Yi, Qiang Xiao, Rong Huang, and An Ping Xiong. "The Metadata Dynamic Load-Balancing Strategy of Distributed Filesystem Based on Hash Tags." Applied Mechanics and Materials 556-562 (May 2014): 4009–13. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.4009.

Full text
Abstract:
With the development of information technology, distributed file systems are widely used in massive information storage. Usually, a distributed file system uses a metadata server to achieve quick access to files by directory, so the organization and management of metadata are key to file system performance. In general, directory subtree partitioning and hash algorithms are used by existing mass storage systems to manage metadata. However, to solve problems such as low metadata access efficiency, ineffective load balancing, and poor extensibility in existing metadata mana
APA, Harvard, Vancouver, ISO, and other styles
38

Ochilov, Nizomiddin. "Creating Secure File Systems in Open-Source Operating Systems." WSEAS TRANSACTIONS ON SYSTEMS 21 (November 24, 2022): 221–32. http://dx.doi.org/10.37394/23202.2022.21.24.

Full text
Abstract:
The relevance of this study is determined by insecure data storage on personal computers, as it is the main operating system that performs authentication and file access control. Bypassing these security rules is possible in case of using another open-source operating system on the same personal computer. The aim of this work is the research and development of file encryptors, disk encryptors and file system encryptors. Each of them has its shortcomings which manifest themselves during development. Combining the advantages of file encryptors and file system encryptors helped to overcome those
APA, Harvard, Vancouver, ISO, and other styles
39

Riyahi, Abdullah Mahmoud, Amr Bashiri, Khalid Alshahrani, Saad Alshahrani, Hadi M. Alamri, and Dina Al-Sudani. "Cyclic Fatigue Comparison of TruNatomy, Twisted File, and ProTaper Next Rotary Systems." International Journal of Dentistry 2020 (February 26, 2020): 1–4. http://dx.doi.org/10.1155/2020/3190938.

Full text
Abstract:
TruNatomy (TN; Dentsply Sirona, Maillefer, Ballaigues, Switzerland) is a newly released system that was not tested in any previous studies. The objective of this work is to evaluate cyclic fatigue resistance of the new file and compare it with the Twisted Files (TF) and ProTaper Next (PTN). Forty-five files were distributed into 3 groups: PTN X2 (size 25 and taper 0.06), TF (size 25 and taper 0.06), and TN prime file (size 26 and taper 0.04). Each group included 15 files. Lengths of all files were 25 mm. Cyclic fatigue testing was done using artificial stainless-steel canals with 60-degree cur
APA, Harvard, Vancouver, ISO, and other styles
40

Ullah, Zahid, Sohail Jabbar, Muhammad Haris bin Tariq Alvi, and Awais Ahmad. "Analytical Study on Performance, Challenges and Future Considerations of Google File System." International Journal of Computer and Communication Engineering 3, no. 4 (2014): 279–84. http://dx.doi.org/10.7763/ijcce.2014.v3.336.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Ru, Yuhui Deng, Yi Zhou, and Ping Huang. "Boosting the Restoring Performance of Deduplication Data by Classifying Backup Metadata." ACM/IMS Transactions on Data Science 2, no. 2 (2021): 1–16. http://dx.doi.org/10.1145/3437261.

Full text
Abstract:
Restoring data is the main purpose of data backup in storage systems. The fragmentation issue, caused by physically scattering logically continuous data across a variety of disk locations, poses a negative impact on the restoring performance of a deduplication system. Rewriting algorithms are used to alleviate the fragmentation problem by improving the restoring speed of a deduplication system. However, rewriting methods come at a significant cost in terms of deduplication ratio, leading to a huge waste of storage space. Furthermore, traditional backup approaches treat file metadata and chunk
APA, Harvard, Vancouver, ISO, and other styles
42

Lingala, Arjun Reddy. "Comparison of Table Formats for Data warehouse." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 12 (2024): 1–9. https://doi.org/10.55041/ijsrem15425.

Full text
Abstract:
Modern data warehouses are developed on distributed file systems and object storage that offer scalability, data availability, and performance. Table formats define how data files are organized and stored on the file system. The evolution of data warehousing has given rise to diverse table formats with unique architectures and capabilities aiming at query performance, scalability, and storage optimization. The Hive table format is the foundational component of the Hadoop ecosystem; it uses a centralized metastore and manual partitioning, but query performance is hindered in cases requi
APA, Harvard, Vancouver, ISO, and other styles
43

Joshi, Brijesh Y., Poornashankar ., and Deepali Sawai. "Performance Tuning Of Apache Spark Framework In Big Data Processing with Respect To Block Size And Replication Factor." SAMRIDDHI : A Journal of Physical Sciences, Engineering and Technology 14, no. 02 (2022): 152–58. http://dx.doi.org/10.18090/samriddhi.v14i02.4.

Full text
Abstract:
Apache Spark has recently become the most popular big data analytics framework, and Spark provides default configurations. HDFS stands for Hadoop Distributed File System, meaning that large files are physically stored on multiple nodes in a distributed fashion. The block size determines how large files are distributed, while the replication factor determines how reliable the files are. If there is just one copy of each block for a given file and the node fails, the data in the file becomes unreadable. The block size and replication factor are configurable per file. The results and analy
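As a rough illustration of the trade-off described above, the number of HDFS blocks and the total raw storage for a file follow directly from the block size and replication factor. The sketch below uses hypothetical numbers, not the paper's experimental settings:

```python
import math

def hdfs_footprint(file_size_mb: int, block_size_mb: int, replication: int):
    """Return (number of blocks, total raw storage in MB) for one file."""
    blocks = math.ceil(file_size_mb / block_size_mb)
    raw_storage = file_size_mb * replication  # every block is stored `replication` times
    return blocks, raw_storage

# A 1 GB file with the common 128 MB block size and 3-way replication:
blocks, raw = hdfs_footprint(1024, 128, 3)  # → 8 blocks, 3072 MB of raw storage
```

A larger block size means fewer blocks (and fewer map tasks), while a higher replication factor multiplies storage cost but lets the file survive node failures.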
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Jun, Changsen Pan, and Menghan Lu. "A Seismic Data Processing System based on Fast Distributed File System." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 14, no. 5 (2015): 5779–88. http://dx.doi.org/10.24297/ijct.v14i5.3986.

Full text
Abstract:
Big data has attracted an increasing amount of attention with the advent of the cloud era, and in the field of seismic exploration, the amount of data created by seismic surveys has also experienced incredible growth in order to satisfy social needs. In this case, it is necessary to build a highly effective system for data storage and processing. In this paper, we address the properties of seismic data and the IO performance requirements, and establish a distributed file system for processing seismic data based on the Fast Distributed File System (Fast DFS), t
APA, Harvard, Vancouver, ISO, and other styles
45

Srikanth, K., P. Venkateswarlu, and Ashok Suragala. "A FUNDAMENTAL CONCEPT OF MAPREDUCE WITH MASSIVE FILES DATASET IN BIG DATA USING HADOOP PSEUDO-DISTRIBUTION MODE." Global Journal of Engineering Science and Research Management 4, no. 5 (2017): 58–62. https://doi.org/10.5281/zenodo.801301.

Full text
Abstract:
Hadoop Distributed File System (HDFS) and the MapReduce programming model are used for the storage and retrieval of big data. Big data can be any structured collection whose size exceeds the capability of conventional data management methods. Terabyte-sized files can be easily stored on HDFS and analyzed with MapReduce. This paper provides an introduction to Hadoop HDFS and MapReduce for storing large numbers of files and retrieving information from these files. In this paper we present our experimental work done on Hadoop by applying a number of files as input to the system and then analyzing th
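The MapReduce model the abstract refers to can be sketched in a few lines: a map step emits key-value pairs from each input file, and a reduce step aggregates values per key. This single-process Python sketch mimics only the programming model (word counting, a stand-in example), not Hadoop's distributed execution:

```python
from collections import defaultdict

def map_phase(files):
    """Map: emit (word, 1) for every word in every input file's text."""
    for text in files:
        for word in text.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

counts = reduce_phase(map_phase(["big data", "Big files on HDFS", "data data"]))
# counts["data"] == 3, counts["big"] == 2
```

In Hadoop, the framework additionally shuffles and sorts the map output so that all pairs sharing a key reach the same reducer node; the mapper/reducer contract is otherwise the same.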
APA, Harvard, Vancouver, ISO, and other styles
46

Zhu, Bohong, Youmin Chen, Qing Wang, Youyou Lu, and Jiwu Shu. "Octopus + : An RDMA-Enabled Distributed Persistent Memory File System." ACM Transactions on Storage 17, no. 3 (2021): 1–25. http://dx.doi.org/10.1145/3448418.

Full text
Abstract:
Non-volatile memory and remote direct memory access (RDMA) provide extremely high performance in storage and network hardware. However, existing distributed file systems strictly isolate file system and network layers, and the heavy layered software designs leave high-speed hardware under-exploited. In this article, we propose an RDMA-enabled distributed persistent memory file system, Octopus + , to redesign file system internal mechanisms by closely coupling non-volatile memory and RDMA features. For data operations, Octopus + directly accesses a shared persistent memory pool to reduce memory
APA, Harvard, Vancouver, ISO, and other styles
47

Suwansrikham, Parinya, She Kun, Shaukat Hayat, and Jehoiada Jackson. "Dew Computing and Asymmetric Security Framework for Big Data File Sharing." Information 11, no. 6 (2020): 303. http://dx.doi.org/10.3390/info11060303.

Full text
Abstract:
Due to the advancement of technology, devices generate huge working files at a rapid rate. These data, which are of considerable scale and are created very fast, can be called big data. Keeping such files in one storage device is impossible. Therefore, a large file size is recommended for storage in a cloud storage service. Although this concept is a solution to solve the storage problem, it still faces challenges in terms of reliability and security. The main issues are the unreliability of single cloud storage when its service is down, and the risk of insider attack from the storage service.
APA, Harvard, Vancouver, ISO, and other styles
48

Samrithi, Yuvaraj, and S. Delphine Priscilla Antony Dr. "CONTINUOUS V/S RECIPROCATING FILE MOTION – A REVIEW." International Journal of Multidisciplinary Research and Modern Education 3, no. 1 (2017): 262–64. https://doi.org/10.5281/zenodo.569091.

Full text
Abstract:
Aim: To review the continuous and reciprocating file motions in endodontics. Background: The first endodontic file was crafted in the mid-1800s. This was basically a rudimentary K-file. K-files are, to this day, the most commonly used hand files in clinical practice. Automated instrumentation of the root canal was an early objective of clinical endodontics. Around 1992-1993, the first rotary NiTi instrument was introduced. These rotary NiTi instruments have undergone various modifications over the years to make them more effective and also to enhance their
APA, Harvard, Vancouver, ISO, and other styles
49

Voshishma, Aleti, Sanjeev Kunhappan, Shruti Sial, Diksha Maheshwari, Ashutosh Shandilya, and Amaravai Ankita Reddy. "Comparative evaluation of remaining dentin thickness and root canal transportation in curved canals with newer nickel–titanium single and multiple rotary file systems: A cone-beam computed tomography study." Journal of Conservative Dentistry and Endodontics 27, no. 10 (2024): 1042–47. http://dx.doi.org/10.4103/jcde.jcde_539_24.

Full text
Abstract:
Context: The purpose of this study was to evaluate and compare the remaining dentin thickness and root canal transportation of the WaveOne GOLD, XP-endo Shaper, and GenEndo file systems to assess their performance in curved canals using cone-beam computed tomography (CBCT) imaging. Materials and Methods: Seventy-five extracted maxillary first molars were selected with a curvature ranging between 15° and 30°. The samples were allocated into three groups (n = 25) and shaped using WaveOne GOLD, XP-endo Shaper, and GenEndo files. CBCT images were captured before and after instrumentation. Sta
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Bo, Ya Yao Zuo, and Zu Chuan Zhang. "Research and Improvement of the Hot Small File Storage Performance under HDFS." Advanced Materials Research 756-759 (September 2013): 1450–54. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1450.

Full text
Abstract:
In order to deal with large numbers of small files and hotspot data in the Hadoop distributed file system (HDFS), and building on existing proposals, this paper puts forward a new hotspot data processing model. The model proposes changing the block size, introducing an efficient indexing mechanism, improving the dynamic replica management strategy, and designing a new HDFS architecture to save space, speed up system processing, and enhance security.
APA, Harvard, Vancouver, ISO, and other styles