
Journal articles on the topic 'Cluster File System'


Consult the top 50 journal articles for your research on the topic 'Cluster File System.'


1

Bhalke, Abhilisha Pandurang. "Enhancement the Efficiency of Work with P2P System." International Journal for Modern Trends in Science and Technology 06, no. 09 (2020): 68–72. http://dx.doi.org/10.46501/ijmtst060911.

Abstract:
The P2P system should use proximity information to minimize the load of file requests and improve work efficiency. Clustering peers by their physical proximity can also raise file-request performance. However, very few current works cluster peers based on both their requests and their physical proximity. Although structured P2P provides more efficient file requests than unstructured P2P, it is difficult to apply because of its strictly defined topology. In this work, we intend to introduce a system for P2P file exchange based on proximity and level of interest based on str…
2

Karresand, Martin, Stefan Axelsson, and Geir Olav Dyrkolbotn. "Disk Cluster Allocation Behavior in Windows and NTFS." Mobile Networks and Applications 25, no. 1 (2019): 248–58. http://dx.doi.org/10.1007/s11036-019-01441-1.

Abstract:
The allocation algorithm of a file system has a huge impact on almost all aspects of digital forensics, because it determines where data is placed on storage media. Yet only basic information is available on the allocation algorithm of the currently most widely used file system: NTFS. We have therefore studied the NTFS allocation algorithm and its behavior empirically. To do that we used two virtual machines running Windows 7 and 10 on NTFS-formatted fixed-size virtual hard disks, the first being 64 GiB and the latter 1 TiB in size. Files of different sizes were written to disk…
3

Shekhanin, K. Yu, Yu I. Gorbenko, L. O. Gorbachova, and A. A. Kuznetsov. "Study of storage devices properties for steganographic data hiding in cluster file systems." Radiotekhnika, no. 203 (December 23, 2020): 109–20. http://dx.doi.org/10.30837/rt.2020.4.203.10.

Abstract:
Methods for technical steganography have been developed in recent years. Hiding of information in such systems is achieved by using properties artificially created by humans while constructing various technical means. An example of technical steganography is the application of the features of constructing clustered file systems. This makes it possible to hide information effectively by changing the alternation of individual clusters of the so-called cover files. The names of such files are the key information, and it is extremely difficult to recover a hidden message without links (i.e. without na…
4

Shekhanin, Kirill, Lyudmila Gorbachova, and Kuznetsova Kuznetsova. "Comparative analysis and study of the properties of information carriers for steganographic data hiding in clustered file systems." Computer Science and Cybersecurity, no. 1 (2021): 37–49. http://dx.doi.org/10.26565/2519-2310-2021-1-03.

Abstract:
The paper studies and analyzes various modern information storage technologies, namely HDD, Flash-USB, and SSD. We've analyzed different indicators such as the number of shipped products, price, and reading and writing speed. Besides, we've considered some indicators of the information carriers' efficiency from the point of view of the possibility of using steganographic methods for hiding information in clustered file systems. The speed of sequential reading/writing and the speed of access to a random cluster, corresponding to the speed of access to a fragmented file, have been analyzed. For thi…
5

Awangga, Rolly Maulana, Syafrial Fachri Pane, and Cahya Kurniawan. "AMCF : A Novel Archive Modeling Based on Data Cluster and Filtering." Technomedia Journal 4, no. 2 (2019): 139–52. http://dx.doi.org/10.33050/tmj.v4i2.815.

Abstract:
File archiving now needs to be appropriately managed so that files are easy to find and maintain. The file archiving in question concerns helping to search through data of considerable volume, reducing search time by integrating the search with the system created. Archiving itself aims to facilitate the management of very diverse, large-volume data, as well as the control carried out over it. The problem with filing archives, in this case, is the lack of management regarding the correct filing of archives. Cont…
6

Shekhanin, K. Yu, S. V. Pshenichnaya, and A. A. Kuznetsov. "Investigation of the computational complexity of methods for hiding information in cluster steganosystems." Radiotekhnika, no. 206 (September 24, 2021): 77–87. http://dx.doi.org/10.30837/rt.2021.3.206.07.

Abstract:
Several methods of technical steganography are currently known. One hides information in a model for 3D printing; this branch of information hiding has certain advantages and disadvantages, namely the relatively high cost of creating a hidden message and the difficulty of reading the information. The second area of technical steganography is related to network traffic. In this method, information can be hidden, for example, in the header fields of protocols, or by transmitting a hidden message through sending packets in a certain sequence. There are also methods of hiding informati…
7

ten Klooster, Iris, Matthijs Leendert Noordzij, and Saskia Marion Kelders. "Exploring How Professionals Within Agile Health Care Informatics Perceive Visualizations of Log File Analyses: Observational Study Followed by a Focus Group Interview." JMIR Human Factors 7, no. 1 (2020): e14424. http://dx.doi.org/10.2196/14424.

Abstract:
Background: An increasing number of software companies work according to the agile software development method, which is difficult to integrate with user-centered design (UCD) practices. Log file analysis may provide opportunities for integrating UCD practices in the agile process. However, research within health care information technology mostly has a theoretical approach and is often focused on the researcher’s interpretation of log file analyses. Objective: We aimed to propose a systematic approach to log file analysis in this study and present this to developers to explore how they react an…
8

Gupta, Manish Kumar, and Rajendra Kumar Dwivedi. "Blockchain Enabled Hadoop Distributed File System Framework for Secure and Reliable Traceability." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 12 (December 29, 2023): e31478. http://dx.doi.org/10.14201/adcaij.31478.

Abstract:
Hadoop Distributed File System (HDFS) is a distributed file system that allows large amounts of data to be stored and processed across multiple servers in a Hadoop cluster. HDFS also provides high throughput for data access. HDFS enables the management of vast amounts of data using commodity hardware. However, security vulnerabilities in HDFS can be manipulated for malicious purposes. This emphasizes the significance of establishing strong security measures to facilitate file sharing within Hadoop and implementing a reliable mechanism for verifying the legitimacy of shared files. The objective
9

Amar, Lior, Amnon Barak, and Amnon Shiloh. "The MOSIX Direct File System Access Method for Supporting Scalable Cluster File Systems." Cluster Computing 7, no. 2 (2004): 141–50. http://dx.doi.org/10.1023/b:clus.0000018563.68085.4b.

10

Maurya, Jay. "Contributions to Hadoop File System Architecture by Revising the File System Usage Along with Automatic Service." ECS Transactions 107, no. 1 (2022): 2903–10. http://dx.doi.org/10.1149/10701.2903ecst.

Abstract:
The use of unstructured data by companies has become commonplace, and social media usage has risen heavily over the past decade. The sharing of images, audio, and video content by individual users and corporations can be observed everywhere. The current work focuses on revision contributions to the Hadoop framework so as to improve the performance of the ecosystem in terms of space and time. The architecture basically provides the usage of the Hadoop Distributed File System (HDFS) and MapReduce (MR). We propose certain revision contributions so that the process of importing and processing…
11

da Silva, Erico Correia, Liria Matsumoto Sato, and Edson Toshimi Midorikawa. "Distributed File System to Leverage Data Locality for Large-File Processing." Electronics 13, no. 1 (2023): 106. http://dx.doi.org/10.3390/electronics13010106.

Abstract:
Over the past decade, significant technological advancements have led to a substantial increase in data proliferation. Both scientific computation and Big Data workloads play a central role, manipulating massive data and challenging conventional high-performance computing architectures. Efficiently processing voluminous files using cost-effective hardware remains a persistent challenge, limiting access to new technologies for individuals and organizations capable of higher investments. In response to this challenge, AwareFS, a novel distributed file system, addresses the efficient reading and
12

Pandithurai, O., et al. "Hadoop-based File Monitoring System for Processing Image Data." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 2 (2023): 202–5. http://dx.doi.org/10.17762/ijritcc.v11i2.9833.

Abstract:
This paper presents a file monitoring system based on the Hadoop framework, specifically designed for image data processing. The system comprises a Hadoop cluster and a client, where the Hadoop cluster includes various modules such as a name node module, a name node agent module, data node modules, a matching module, and a response algorithm module. The name node agent module acts as an intermediary between the client and the name node module, forwarding function information and acquiring configuration information. The system provides comprehensive monitoring capabilities for the distributed f
13

He, Qinlu, Genqing Bian, Bilin Shao, and Weiqi Zhang. "Research on Multifeature Data Routing Strategy in Deduplication." Scientific Programming 2020 (October 14, 2020): 1–11. http://dx.doi.org/10.1155/2020/8869237.

Abstract:
Deduplication is a popular data reduction technology in storage systems which has significant advantages, such as finding and eliminating duplicate data, reducing data storage capacity required, increasing resource utilization, and saving storage costs. The file features are a key factor that is used to calculate the similarity between files, but the similarity calculated by the single feature has some limitations especially for the similar files. The storage node feature reflects the load condition of the node, which is the key factor to be considered in the data routing. This paper introduce
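The multifeature routing idea this abstract summarizes, picking a storage node by combining a content-similarity feature with a node-load feature, can be illustrated with a minimal generic sketch (not the paper's algorithm; the chunking, Jaccard scoring, weighting `alpha`, and node names are all hypothetical):

```python
import hashlib

def chunk_fingerprints(data: bytes, chunk_size: int = 8) -> set:
    """Hash fixed-size chunks; the set of digests acts as the file's feature."""
    return {
        hashlib.sha1(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    }

def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two fingerprint sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def route(file_fp: set, nodes: dict, load: dict, alpha: float = 0.7) -> str:
    """Pick the node maximizing content similarity while penalizing load."""
    return max(
        nodes,
        key=lambda n: alpha * similarity(file_fp, nodes[n]) - (1 - alpha) * load[n],
    )

# Hypothetical node features (fingerprints of data already stored) and loads.
nodes = {"n1": chunk_fingerprints(b"hello world, hello world"),
         "n2": chunk_fingerprints(b"completely different bytes")}
load = {"n1": 0.2, "n2": 0.1}
new_file = chunk_fingerprints(b"hello world, hello there")
print(route(new_file, nodes, load))  # -> n1 (more similar content wins despite higher load)
```

Routing similar files to the same node raises the chance of deduplication hits there, which is the trade-off the abstract describes between similarity and load balance.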
14

Kuhnert, Nadine, and Andreas Maier. "With Semantics and Hidden Markov Models to an Adaptive Log File Parser." International Journal on Natural Language Computing (IJNLC) 8, no. 6 (2022): 14. https://doi.org/10.5281/zenodo.6782982.

Abstract:
We aim to model an adaptive log file parser. As the content of log files often evolves over time, we established a dynamic statistical model which learns and adapts processing and parsing rules. First, we limit the amount of unstructured text by clustering based on semantics of log file lines. Next, we only take the most relevant cluster into account and focus only on those frequent patterns which lead to the desired output table similar to Vaarandi [10]. Furthermore, we transform the found frequent patterns and the output stating the parsed table into a Hidden Markov Model (HMM). We use this
15

Narkhede, Sayalee, and Tripti Baraskar. "HMR Log Analyzer: Analyze Web Application Logs over Hadoop MapReduce." International Journal of Ubiquitous Computing (IJU) 4, no. 3 (2023): 11. https://doi.org/10.5281/zenodo.8386836.

Abstract:
In today’s Internet world, log file analysis is becoming a necessary task for analyzing customer behavior in order to improve advertising and sales; for datasets from domains such as environment, medicine, and banking it is likewise important to analyze the log data to extract the required knowledge. Web mining is the process of discovering knowledge from web data. Log files are generated very fast, at a rate of 1–10 MB/s per machine; a single data center can generate tens of terabytes of log data in a day. These datasets are huge. In order to analyze such large datasets…
16

Gao, Chang, Baogang Chen, and HaiPing Si. "Research on Improving the Optimization System of Agricultural and Forestry Resource Allocation Management Measures by Using Multi-Objective Optimization Algorithm." Journal of Electrical Systems 20, no. 2 (2024): 483–94. http://dx.doi.org/10.52783/jes.1202.

Abstract:
At present, there are some problems in the use of forestry resource database, such as large amount of data and data centralization, which leads to the low efficiency of the system. Therefore, an application software platform for forestry informatization is built based on optimized Hadoop cluster. The system consists of four parts: permission management, file management, interface management and data processing. In this paper, a control strategy FDMDR is designed, which implements dynamic copy management according to user access frequency. A method of file access frequency limit based on DMDR i
17

Thesma, Vaishnavi, Glen C. Rains, and Javad Mohammadpour Velni. "Development of a Low-Cost Distributed Computing Pipeline for High-Throughput Cotton Phenotyping." Sensors 24, no. 3 (2024): 970. http://dx.doi.org/10.3390/s24030970.

Abstract:
In this paper, we present the development of a low-cost distributed computing pipeline for cotton plant phenotyping using Raspberry Pi, Hadoop, and deep learning. Specifically, we use a cluster of several Raspberry Pis in a primary-replica distributed architecture using the Apache Hadoop ecosystem and a pre-trained Tiny-YOLOv4 model for cotton bloom detection from our past work. We feed cotton image data collected from a research field in Tifton, GA, into our cluster’s distributed file system for robust file access and distributed, parallel processing. We then submit job requests to our cluste
18

Boiko, Maksym, and Viacheslav Moskalenko. "Syntactical method for reconstructing highly fragmented OOXML files." Radioelectronic and Computer Systems, no. 1 (March 7, 2023): 166–82. http://dx.doi.org/10.32620/reks.2023.1.14.

Abstract:
A common task in computer forensics is to recover files that lack file system metadata. In the case of searching for file fragments in unallocated space, file carving is the most often used method, which is ideal for unfragmented data. However, such methods and the tools based on them are ineffective for recovering OOXML files with a high fragmentation level. These methods do not provide reliable determination of the correct order of fragments. Techniques for reconstructing documents based on the analysis of words and phrases are also ineffective in fragmented OOXML documents. The main reason
19

Shinde, Poonam, and V. M. Sardar. "Data Loss Prevention in Congestion Prone Wireless Sensor Network Using VCLR Model." JournalNX - A Multidisciplinary Peer Reviewed Journal 3, no. 9 (2017): 52–55. https://doi.org/10.5281/zenodo.1420616.

Abstract:
In a WSN, a source node generates an event within the cluster, and the collected information is forwarded towards the sink node through the cluster head node. The cluster head acts as an intermediate node between the sink node and the source node. Congestion may occur at the cluster head, which prompts data loss and also affects the reliability of the network. In the existing system the cluster head only performs normal data packet transmission from source node to sink. Due to insufficient buffer size at the cluster head node, many packets drop during transmission. So, this system gives lower packet del…
20

Nirmala, M. S. "HBA Distributed Metadata Management for Large Cluster Based Storage Systems." International Journal of Trend in Scientific Research and Development 2, no. 5 (2018): 1966–71. https://doi.org/10.31142/ijtsrd18211.

Abstract:
An efficient and distributed scheme for file mapping or file lookup is critical in decentralizing metadata management within a group of metadata servers. This paper presents a novel technique called Hierarchical Bloom Filter Arrays (HBA) to map filenames to the metadata servers holding their metadata. Two levels of probabilistic arrays, namely, Bloom filter arrays with different levels of accuracy, are used on each metadata server. One array, with lower accuracy and representing the distribution of the entire metadata, trades accuracy for significantly reduced memory overhead, whereas the…
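The core mechanism this abstract describes, probing an array of per-server Bloom filters to find which metadata server may hold a filename, can be sketched generically (this is an illustration of the idea, not the paper's HBA implementation; filter sizing, hash counts, and server names are hypothetical):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions set in an m-bit integer."""

    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, name: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, name: str):
        for pos in self._positions(name):
            self.bits |= 1 << pos

    def may_contain(self, name: str) -> bool:
        return all(self.bits >> pos & 1 for pos in self._positions(name))

# One filter per metadata server; a lookup probes the whole array.
servers = {"mds0": BloomFilter(), "mds1": BloomFilter()}
servers["mds0"].add("/home/a/report.txt")
servers["mds1"].add("/home/b/data.bin")

def lookup(name: str):
    """Candidate servers: false positives possible, false negatives impossible."""
    return [s for s, bf in servers.items() if bf.may_contain(name)]

print(lookup("/home/a/report.txt"))  # -> ['mds0'] unless a (rare) false positive adds 'mds1'
```

The memory/accuracy trade-off the abstract mentions corresponds to choosing `m` and `k`: a smaller filter covering all metadata is cheap but yields more false positives, so a second, more accurate level resolves them.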
21

Mardedi, Lalu Zazuli Azhar. "Analisa Kinerja System Gluster FS pada Proxmox VE untuk Menyediakan High Availability." MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer 19, no. 1 (2019): 173–85. http://dx.doi.org/10.30812/matrik.v19i1.473.

Abstract:
Virtualization is used as a means to improve the scalability of existing hardware. Proxmox Virtual Environment (PVE) with hypervisor type based on open source. PVE can use Network Attached Storage as a network-based storage location in GlusterFS storage, which is a distributed file system. The research methodology uses Network Development Live Cycle (NDLC) which has 3 (three) stages, namely analysis, design, and simulation prototyping. The analysis phase is carried out by collecting data by means of literature study and data analysis. The design phase is carried out making the design of networ
22

Wu, Zhen Quan, and Bing Pan. "Research of Distributed Search Engine Based on Hadoop." Applied Mechanics and Materials 631-632 (September 2014): 171–74. http://dx.doi.org/10.4028/www.scientific.net/amm.631-632.171.

Abstract:
Combined with the Map/Reduce programming model, the Hadoop distributed file system, Lucene inverted file indexing technology and ICTCLAS Chinese word segmentation technology, we designed and implemented a distributed search engine system based on Hadoop. By testing of the system in the four-node Hadoop cluster environment, experimental results show that Hadoop platform can be used in search engines to improve system performance, reliability and scalability.
23

Yuling, Liu, Song Weiwei, and Ma Xiaoxue. "Load-balance policy in two level-cluster file system." Wuhan University Journal of Natural Sciences 11, no. 6 (2006): 1935–38. http://dx.doi.org/10.1007/bf02831911.

24

Zhou, Jiang, Can Ma, Jin Xiong, Weiping Wang, and Dan Meng. "Highly reliable message-passing mechanism for cluster file system." International Journal of Parallel, Emergent and Distributed Systems 28, no. 6 (2013): 556–75. http://dx.doi.org/10.1080/17445760.2012.757316.

25

Aldhmour, Mamoun, Rakan Aldmour, A. Y. Al-Zoubi, and Mohamed Sedky. "Optimizing Off-Chain Storage in Blockchain of Things Systems." International Journal of Online and Biomedical Engineering (iJOE) 21, no. 01 (2025): 118–31. https://doi.org/10.3991/ijoe.v21i01.53157.

Abstract:
The InterPlanetary File System (IPFS) offers decentralized storage and data sharing, which are critical for the functionality of Blockchain of Things (BCoT) systems. Despite its advantages, IPFS faces challenges such as scalability, latency, and resource management issues that hinder its effective integration into existing blockchain infrastructures. This study explores the implementation of Docker containerization to enhance IPFS performance within BCoT environments. An experimental testbed was established, comprising an IPFS node and an IPFS Cluster peer deployed as Docker containers, to eva
26

Jain, R., P. Sarkar, and D. Subhraveti. "GPFS-SNC: An enterprise cluster file system for Big Data." IBM Journal of Research and Development 57, no. 3/4 (2013): 5:1–5:10. http://dx.doi.org/10.1147/jrd.2013.2243531.

27

Aladyshev, O. S., B. M. Shabanov, and A. V. Zakharchenko. "Expectations of the High Performance Computing Cluster File System Selection." Lobachevskii Journal of Mathematics 44, no. 12 (2023): 5132–47. http://dx.doi.org/10.1134/s1995080223120041.

28

Hou, Weiguang, Gang He, and Xinwen Liu. "Dynamic load balancing scheme on massive file transfer system." MATEC Web of Conferences 232 (2018): 04004. http://dx.doi.org/10.1051/matecconf/201823204004.

Abstract:
In this paper, a dynamic load balancing scheme applied to massive file transfer system is proposed. The scheme is designed to load balance FTP server cluster. Instead of recording connection number, runtime load information of each server is periodically collected and used in combination of static performance parameters collected on server startup to calculate the weight of servers. Improved Weighted Round-Robin algorithm is adopted in this scheme. Importantly, the weight of each server is initialized with static performance parameters and dynamically modified according to the runtime load. Ap
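The improved Weighted Round-Robin scheme this abstract outlines, where weights start from static server parameters and are adjusted from runtime load, can be sketched generically (an illustration of smooth weighted round-robin with dynamic weights, not the paper's code; server names and the weight formula are hypothetical):

```python
class WeightedRoundRobin:
    """Smooth weighted round-robin with dynamically adjustable weights."""

    def __init__(self, weights: dict):
        self.weights = dict(weights)          # effective weight per server
        self.current = {s: 0 for s in weights}

    def pick(self) -> str:
        # Smooth WRR: raise each current value by its weight, select the
        # largest, then lower the winner by the total weight.
        total = sum(self.weights.values())
        for s, w in self.weights.items():
            self.current[s] += w
        winner = max(self.current, key=self.current.get)
        self.current[winner] -= total
        return winner

    def update_weight(self, server: str, load: float, base: int):
        # Periodically recompute from runtime load (lower load -> higher weight).
        self.weights[server] = max(1, round(base * (1 - load)))

wrr = WeightedRoundRobin({"ftp1": 3, "ftp2": 1})
print([wrr.pick() for _ in range(4)])  # -> ['ftp1', 'ftp1', 'ftp2', 'ftp1']
```

The "smooth" variant interleaves picks instead of sending bursts to the heaviest server, which matters for an FTP cluster where long transfers pin connections.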
29

Ali, Rabei Raad, Najwan Zuhair Waisi, Yahya Younis Saeed, Mohammed S. Noori, and Eko Hari Rachmawanto. "Intelligent Classification of JPEG files by Support Vector Machines with Content-based Feature Extraction." Journal of Intelligent Systems and Internet of Things 11, no. 1 (2024): 01–11. http://dx.doi.org/10.54216/jisiot.110101.

Abstract:
Nowadays, multimedia files play a basic role in supporting evidence analysis for making decisions about a crime by examining files as digital evidence. Multimedia files such as JPG images are a common format because many documents and memorial images on laptops are valuable. In addition, many JPG images on laptops have little structural content, making recovery possible when their file system is missing. However, building intelligent systems for fully recovering corrupted JPG images into their original form is a challenging research issue. In this research, a support ve…
30

Umam, Chaerul, L. Budi Handoko, and Ghulam Maulana Rizqi. "Implementation And Analysis High Availability Network File System Based Server Cluster." Jurnal Transformatika 16, no. 1 (2018): 31. http://dx.doi.org/10.26623/transformatika.v16i1.841.

31

Kim, Youngchul, Cheiyol Kim, Sangmin Lee, and Youngkyun Kim. "Design and Implementation of Inline Data Deduplication in Cluster File System." KIISE Transactions on Computing Practices 22, no. 8 (2016): 369–74. http://dx.doi.org/10.5626/ktcp.2016.22.8.369.

32

Myint, Julia, and Thinn Thu Naing. "Management of Data Replication for PC Cluster Based Cloud Storage System." International Journal on Cloud Computing: Services and Architecture (IJCCSA) 1, November (2018): 01–11. https://doi.org/10.5281/zenodo.1449827.

Abstract:
Storage systems are essential building blocks for cloud computing infrastructures. Although high performance storage servers are the ultimate solution for cloud storage, the implementation of inexpensive storage system remains an open issue. To address this problem, the efficient cloud storage system is implemented with inexpensive and commodity computer nodes that are organized into PC cluster based datacenter. Hadoop Distributed File System (HDFS) is an open source cloud based storage platform and designed to be deployed in low-cost hardware. PC Cluster based Cloud Storage System is implemen
33

Achandair, O., S. Bourekkadi, E. Elmahouti, S. Khoulji, and M. L. Kerkeb. "solution for the future: small file management by optimizing Hadoop." International Journal of Engineering & Technology 7, no. 2.6 (2018): 221. http://dx.doi.org/10.14419/ijet.v7i2.6.10773.

Abstract:
Hadoop Distributed File System (HDFS) is designed to reliably store very large files across machines in a large cluster. It is one of the most used distributed file systems and offers high availability and scalability on low-cost hardware. All Hadoop frameworks have HDFS as their storage component. Coupled with MapReduce, the processing component, HDFS has become a standard platform for any management of big data these days. HDFS, however, is designed to handle huge numbers of large files, but when it comes to its de…
34

Sun, Qiu Dong, Jian Cun Zuo, Yu Feng Shao, and Lin Gui. "Special Database System Design Using Sector Organization and Server Clustering Techniques." Advanced Materials Research 915-916 (April 2014): 1377–81. http://dx.doi.org/10.4028/www.scientific.net/amr.915-916.1377.

Abstract:
In order to reform the shortcomings of common database with a slower access speed and lower security level, this paper applied sector operating directly instead of general file access, and used the distributed computing and clustering techniques to form an information server cluster as the special database system. Firstly, the layout and sector segmentation methods were provided for data access in sector based database. And then some management methods were given to control information servers in the cluster. Finally, to more efficiently schedule the tasks for storing data and querying informa
35

Lydia, E. Laxmi, and M. Srinivasa Rao. "Applying Compression Algorithms on Hadoop Cluster Implementing through Apache Tez and Hadoop MapReduce." International Journal of Engineering & Technology 7, no. 2.26 (2018): 80. http://dx.doi.org/10.14419/ijet.v7i2.26.12539.

Abstract:
The latest and most famous subject across the cloud research area is Big Data; its main characteristics are volume, velocity, and variety. These characteristics are difficult to manage through traditional software and the various available methodologies. Data arising from the various domains of big data are handled through Hadoop, an open framework mainly developed to provide solutions. Big data analytics is handled through the Hadoop MapReduce framework, which is the key engine of a Hadoop cluster and is extensively used these days. It uses batch…
36

Sharma, Sheetu. "Design of File System Architecture with Cluster Formation along with Mount Table." International Journal of Research in Engineering and Technology 03, no. 06 (2014): 418–22. http://dx.doi.org/10.15623/ijret.2014.0306077.

37

Ievlev, K. O., and M. G. Gorodnichev. "Comparative Analysis of HDFS and Apache Ozone Data Storage Systems." Computational Nanotechnology 12, no. 1 (2025): 26–33. https://doi.org/10.33693/2313-223x-2025-12-1-26-33.

Abstract:
Over the last few decades, both the volume of digital data in the globe and the variety of ways to use it have increased dramatically. For a long time, the Hadoop ecosystem, which is still widely utilized, has been synonymous with large data storage and processing platforms. However, during the past 20 years, Hadoop has been found to have a number of serious flaws, including the “small files problem” and uneven cluster resource usage. Various commercial and research organizations are faced with the issue of upgrading the data stack to improve resource utilization and increasing data processing
38

Jiang, Xiaowei, Chaoqi Guo, Qingbao Hu, Ran Du, Jingyan Shi, and Gongxing Sun. "Using Kerberos Tokens in Distributed Computing System at IHEP." EPJ Web of Conferences 295 (2024): 04052. http://dx.doi.org/10.1051/epjconf/202429504052.

Abstract:
The token-based certification method is spreading in the distributed computing system of high energy physics. More and more software and middleware are supporting tokens as one of the certification methods. As an example, WLCG has upgraded all the services to support WLCG tokens [1]. In IHEP (Institute of High Energy Physics in China), the Kerberos [2] token has been used as the main certification method in the local cluster. Naturally, it is selected as the certification method in the distributed computing system. In this case, a set of toolkits were developed or introduced to use Kerberos to
39

Valenzuela, Andrea, and Jakob Blomer. "CernVM-FS ephemeral publishers on Kubernetes." Journal of Physics: Conference Series 2438, no. 1 (2023): 012014. http://dx.doi.org/10.1088/1742-6596/2438/1/012014.

Abstract:
The CernVM File System (CernVM-FS) is a global read-only POSIX file system that provides scalable and reliable software distribution to numerous scientific collaborations. It gives access to more than a billion binary files of experiment application software stacks and operating system containers to end user devices, grids, clouds, and supercomputers. CernVM-FS is asymmetric by construction. Writing into the repository is a centralized operation called publishing, while reading is allowed for many clients from many locations. The classic publishing process needs a dedicated “release m…
40

R., Yasir Abdullah, Mary Posonia A., and Barakkath Nisha U. "A Graph Correlated Anomaly Detection with Fuzzy Model for Distributed Wireless Sensor Networks." International Journal of Electrical and Electronic Engineering & Telecommunications 12, no. 5 (2023): 306–16. http://dx.doi.org/10.18178/ijeetc.12.5.306-316.

Abstract:
Wireless sensor networks have limited power for processing data, storage, and communication. Due to power shortages and anonymous attacks, sensor nodes may produce faulty or anomaly data which affects the accuracy of the entire system. Effective anomaly detection is essential to make an accurate prediction of the result. Moreover, clustering-based anomaly detection reduces energy consumption by avoiding individual sensory data reporting to the base station. The proposed methodology consists of two phases: Correlated graph clustering, and anomaly detection using a Fuzzy model. In the first phas
APA, Harvard, Vancouver, ISO, and other styles
41

Peltotalo, Jani, Jarmo Harju, Lassi Väätämöinen, Imed Bouazizi, and Igor D. D. Curcio. "RTSP-based Mobile Peer-to-Peer Streaming System." International Journal of Digital Multimedia Broadcasting 2010 (2010): 1–15. http://dx.doi.org/10.1155/2010/470813.

Full text
Abstract:
Peer-to-peer is emerging as a potentially disruptive technology for content distribution in the mobile Internet. In addition to the already well-known peer-to-peer file sharing, real-time peer-to-peer streaming is gaining popularity. This paper presents an effective real-time peer-to-peer streaming system for the mobile environment. The basis for the system is a scalable overlay network which groups peers into clusters according to their proximity, using RTT values between peers as the criterion for cluster selection. The actual media delivery in the system is implemented using the partial RTP
APA, Harvard, Vancouver, ISO, and other styles
42

Valverde Cameselle, Roberto, and Hugo Gonzalez Labrador. "Addressing a billion-entries multi-petabyte distributed file system backup problem with cback: from files to objects." EPJ Web of Conferences 251 (2021): 02071. http://dx.doi.org/10.1051/epjconf/202125102071.

Full text
Abstract:
CERNBox is the cloud collaboration hub at CERN. The service has more than 37,000 user accounts. The backup of user and project spaces data is critical for the service. The underlying storage system hosts over a billion files which amount to 12PB of storage distributed over thousands of disks with a two-replica layout. Performing a backup operation over this vast amount of data and number of files is a non-trivial task. The original CERNBox backup system (an in-house event-driven file-level system) has been reconsidered and replaced by a new distributed and scalable backup infrastructure based o
APA, Harvard, Vancouver, ISO, and other styles
43

Bhathal, Gurjit Singh, and Amardeep Singh Dhiman. "Big Data Security Challenges and Solution of Distributed Computing in Hadoop Environment: A Security Framework." Recent Advances in Computer Science and Communications 13, no. 4 (2020): 790–97. http://dx.doi.org/10.2174/2213275912666190822095422.

Full text
Abstract:
Background: In current scenario of internet, large amounts of data are generated and processed. Hadoop framework is widely used to store and process big data in a highly distributed manner. It is argued that Hadoop Framework is not mature enough to deal with the current cyberattacks on the data. Objective: The main objective of the proposed work is to provide a complete security approach comprising of authorisation and authentication for the user and the Hadoop cluster nodes and to secure the data at rest as well as in transit. Methods: The proposed algorithm uses Kerberos network authenticati
APA, Harvard, Vancouver, ISO, and other styles
44

Qin, Ting, and Satoshi Fujita. "Automatic Tag Attachment Scheme based on Text Clustering for Efficient File Search in Unstructured Peer-to-Peer File Sharing Systems." JUCS - Journal of Universal Computer Science 18, no. 8 (2012): 1032–47. http://dx.doi.org/10.3217/jucs-018-08-1032.

Full text
Abstract:
In this paper, the authors address the issue of automatic tag attachment to the documents distributed over a P2P network aiming at improving the efficiency of file search in such networks. The proposed scheme combines text clustering with a modified tag extraction algorithm, and is executed in a fully distributed manner. Meanwhile, the optimal cluster number can also be fixed automatically through a distance cost function. We have conducted experiments to evaluate the accuracy of the proposed scheme. The result of experiments indicates that the proposed approach is capable of making effective
APA, Harvard, Vancouver, ISO, and other styles
45

Ren, Yitong, Zhaojun Gu, Zhi Wang, et al. "System Log Detection Model Based on Conformal Prediction." Electronics 9, no. 2 (2020): 232. http://dx.doi.org/10.3390/electronics9020232.

Full text
Abstract:
With the rapid development of the Internet of Things, combining the Internet of Things with machine learning, Hadoop, and other fields is a current development trend. The Hadoop Distributed File System (HDFS) is one of the core components of Hadoop; it processes files that are divided into data blocks distributed across the cluster. Anomalies in the distributed log data can cause serious losses. When using machine learning algorithms for system log anomaly detection, the output of threshold-based classification models is only a simple prediction of normal or abnormal. This p
APA, Harvard, Vancouver, ISO, and other styles
46

Lee, Kyu-Woong. "Performance Evaluation of I/O Intensive Stress Test in Cluster File System SANiqueTM." Journal of the Korean Institute of Information and Communication Engineering 14, no. 2 (2010): 415–20. http://dx.doi.org/10.6109/jkiice.2010.14.2.415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Jang, Jun-Ho, Sae-Young Han, and Sung-Yong Park. "A Content-based Load Balancing Algorithm for Metadata Servers in Cluster File System." KIPS Transactions:PartA 13A, no. 4 (2006): 323–34. http://dx.doi.org/10.3745/kipsta.2006.13a.4.323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Zhi-gang, Bi-qing Zeng, Ce Xiong, Xiao-heng Deng, Zhi-wen Zeng, and An-feng Liu. "Heuristic file sorted assignment algorithm of parallel I/O on cluster computing system." Journal of Central South University of Technology 12, no. 5 (2005): 572–77. http://dx.doi.org/10.1007/s11771-005-0125-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Junwei. "CFS3M: A Cluster File System Scalable Model Based on Two-Dimension Service Separation." IEIT Journal of Adaptive and Dynamic Computing 2010, no. 1 (2010): 6. http://dx.doi.org/10.5813/www.ieit-web.org/ijadc/2010.1.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Fang, Hui, Jiandi Jiang, Feng Lin, and Wei Zhang. "Optimized Design of Multilines Center of Subway AFC System via Distributed File System and Bayesian Network Model." Journal of Sensors 2021 (December 20, 2021): 1–16. http://dx.doi.org/10.1155/2021/1500829.

Full text
Abstract:
The automatic fare collection system (AFCS) is a modern, automatic, networked toll collection system for rail transit ticket sales, collection, billing, charging, statistics, sorting, and management. To realize networked subway transit operation, this paper designs a subway AFCS based on a distributed file system (DFS), namely the Gluster File System (GlusterFS). Firstly, the multiline center (MLC) in the subway AFCS is designed to analyze the status and current situation of distributed file processing in the subway MLC system; secondly, the relevant technical theories are summarized, the Bayesian
APA, Harvard, Vancouver, ISO, and other styles