
Journal articles on the topic 'Data storage and transfer management'


Consult the top 50 journal articles for your research on the topic 'Data storage and transfer management.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Zhengchun, Rajkumar Kettimuthu, Joaquin Chung, Rachana Ananthakrishnan, Michael Link, and Ian Foster. "Design and Evaluation of a Simple Data Interface for Efficient Data Transfer across Diverse Storage." ACM Transactions on Modeling and Performance Evaluation of Computing Systems 6, no. 1 (June 2021): 1–25. http://dx.doi.org/10.1145/3452007.

Abstract:
Modern science and engineering computing environments often feature storage systems of different types, from parallel file systems in high-performance computing centers to object stores operated by cloud providers. To enable easy, reliable, secure, and performant data exchange among these different systems, we propose Connector, a pluggable data access architecture for diverse, distributed storage. By abstracting low-level storage system details, Connector permits a managed data transfer service (Globus, in our case) to interact with a large and easily extended set of storage systems. Equally important, it supports third-party transfers: that is, direct data transfers from source to destination that are initiated by a third-party client but do not engage that third party in the data path. The abstraction also enables management of transfers for performance optimization, error handling, and end-to-end integrity. We present the Connector design, describe implementations for different storage services, evaluate tradeoffs inherent in managed vs. direct transfers, motivate recommended deployment options, and propose a model-based method that allows for easy characterization of performance in different contexts without exhaustive benchmarking.
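The pattern described, one narrow interface that a transfer service programs against while backends vary, can be illustrated with a minimal sketch. The class and method names below (StorageConnector, PosixConnector, read/checksum) are hypothetical stand-ins, not the paper's or Globus's actual API:

```python
import hashlib
import os
from abc import ABC, abstractmethod

class StorageConnector(ABC):
    """Hypothetical minimal connector interface: the transfer service sees
    only these operations, never the storage system's own protocol."""

    @abstractmethod
    def list(self, path: str) -> list: ...

    @abstractmethod
    def read(self, path: str, offset: int, size: int) -> bytes: ...

    @abstractmethod
    def checksum(self, path: str, algorithm: str = "sha256") -> str: ...

class PosixConnector(StorageConnector):
    """One concrete backend: a POSIX file system. An object-store or
    tape-archive connector would implement the same three methods."""

    def list(self, path):
        return os.listdir(path)

    def read(self, path, offset, size):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(size)

    def checksum(self, path, algorithm="sha256"):
        h = hashlib.new(algorithm)  # supports end-to-end integrity checks
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()
```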
2

Bockelman, Brian, Andrew Hanushevsky, Oliver Keeble, Mario Lassnig, Paul Millar, Derek Weitzel, and Wei Yang. "Bootstrapping a New LHC Data Transfer Ecosystem." EPJ Web of Conferences 214 (2019): 04045. http://dx.doi.org/10.1051/epjconf/201921404045.

Abstract:
GridFTP transfers and the corresponding Grid Security Infrastructure (GSI)-based authentication and authorization system have been data transfer pillars of the Worldwide LHC Computing Grid (WLCG) for more than a decade. However, in 2017, the end of support for the Globus Toolkit - the reference platform for these technologies - was announced. This has reinvigorated and expanded efforts to replace these pillars. We present an end-to-end alternative utilizing HTTP-based WebDAV as the transfer protocol, and bearer tokens for distributed authorization. This alternative ecosystem, integrating significant pre-existing work and ideas in the area, adheres to common industry standards to the fullest extent possible, with minimal agreed-upon extensions or common interpretations of the core protocols. The bearer token approach allows resource providers to delegate authorization decisions to the LHC experiments for experiment-dedicated storage areas. This demonstration touches the entirety of the stack - from multiple storage element implementations to FTS3 to the Rucio data management system. We show how the traditional production and user workflows can be reworked utilizing bearer tokens, eliminating the need for GSI proxy certificates for storage interactions.
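The building block here is ordinary HTTP with a bearer token replacing the GSI proxy. A minimal sketch using the `requests` library; the endpoint URLs and the token value are placeholders, and real deployments issue tokens through the experiments' token providers:

```python
import requests

token = "eyJ..."  # placeholder: a bearer token issued to the user or workflow
headers = {"Authorization": f"Bearer {token}"}

# WebDAV upload is a plain HTTP PUT against the storage element.
with open("/data/run42/events.root", "rb") as f:
    r = requests.put(
        "https://se.example.org/webdav/experiment/run42/events.root",  # hypothetical SE
        data=f, headers=headers)
    r.raise_for_status()

# A third-party copy between two WebDAV-speaking storage elements is
# commonly expressed as an HTTP COPY with a Destination header; exact
# semantics vary by storage implementation.
r = requests.request(
    "COPY",
    "https://se.example.org/webdav/experiment/run42/events.root",
    headers={**headers,
             "Destination": "https://tape.example.org/webdav/experiment/run42/events.root"})
```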
3

Abreu, Yamiel, Daniela Bauer, Simon Fayer, and Janusz Martyniak. "Data management for the SoLid experiment." EPJ Web of Conferences 214 (2019): 04040. http://dx.doi.org/10.1051/epjconf/201921404040.

Abstract:
The SoLid experiment is a short-baseline neutrino project located at the BR2 research reactor in Mol, Belgium. It started data taking in November 2017. Data management, including long-term storage, will be handled in close collaboration by Vrije Universiteit Brussel, Imperial College London and Rutherford Appleton Laboratory. We describe the SoLid data management model with an emphasis on the software developed for the file distribution on the Grid, the data archiving and the initial data transfer from the experiment to a Grid storage element. We present results from the first six months of data taking, showing how the system performed in a production setting.
4

Ismagilova, Olga, and Karine Khadzhi. "Global Experience in Regulating Data Protection, Transfer and Storage." Economic Policy 15, no. 3 (June 2020): 152–75. http://dx.doi.org/10.18288/1994-5124-2020-3-152-175.

Abstract:
Cross-border data flow management and privacy protection are placed high on the international digital agenda due to unprecedented growth in the volume and pace of data collection, processing, storage and transfer globally. Despite the high importance of data flow regulation and its serious influence on all enterprises involved in the digital economy, there is little research conducted in Russia that systematizes national strategies in this sphere of regulation. The article provides an overview of the existing approaches of different countries to data protection, transfer (cross-border included) and storage, analyses the impact of regulation on international trade flows, and develops proposals for possible measures to reduce costs for companies in the digital age. The research shows that today most countries of the world regulate personal data and other categories of sensitive data flows through the introduction of either a separate law or data protection provisions in the relevant sectoral laws. The countries' approaches range from a complete ban on the cross-border transfer of all or certain categories of data to foreign countries to complete liberalization in this area. The most common approach is the introduction of one or several restrictions from the set of measures related to cross-border data transfers: a data localization requirement; limitations on the number or type of countries to which sensitive data can be transferred without additional requirements; and the requirement of the personal data subject's consent or responsible public authorities' permission.
5

Veseli, Siniša, Nicholas Schwarz, and Collin Schmitz. "APS Data Management System." Journal of Synchrotron Radiation 25, no. 5 (August 15, 2018): 1574–80. http://dx.doi.org/10.1107/s1600577518010056.

Abstract:
As the capabilities of modern X-ray detectors and acquisition technologies increase, so do the data rates and volumes produced at synchrotron beamlines. This brings into focus a number of challenges related to managing data at such facilities, including data transfer, near real-time data processing, automated processing pipelines, data storage, handling metadata and remote user access to data. The Advanced Photon Source Data Management System software is designed to help beamlines deal with these issues. This paper presents the system architecture and describes its components and functionality; the system's current usage is discussed, examples of its use are provided and future development plans are outlined.
6

Celesti, Antonio, Antonino Galletta, Maria Fazio, and Massimo Villari. "Towards Hybrid Multi-Cloud Storage Systems: Understanding How to Perform Data Transfer." Big Data Research 16 (July 2019): 1–17. http://dx.doi.org/10.1016/j.bdr.2019.02.002.

7

Moradi, Ramin, and Katrina Groth. "On the application of transfer learning in prognostics and health management." Annual Conference of the PHM Society 12, no. 1 (November 3, 2020): 8. http://dx.doi.org/10.36001/phmconf.2020.v12i1.1300.

Abstract:
Advancements in sensing and computing technologies, the development of human and computer interaction frameworks, big data storage capabilities, and the emergence of cloud storage and cloud computing have resulted in an abundance of data in modern industry. This data availability has encouraged researchers and industry practitioners to rely on data-based machine learning, especially deep learning, models for fault diagnostics and prognostics more than ever. These models provide unique advantages; however, their performance is heavily dependent on the training data and how well that data represents the test data. This issue mandates fine-tuning and even training the models from scratch when there is a slight change in operating conditions or equipment. Transfer learning is an approach that can remedy this issue by keeping portions of what is learned from previous training and transferring them to the new application. In this paper, a unified definition for transfer learning and its different types is provided, Prognostics and Health Management (PHM) studies that have used transfer learning are reviewed in detail, and finally a discussion on transfer learning (TL) application considerations and gaps is provided for improving the applicability of transfer learning in PHM.
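The core pattern the review surveys, reuse the representation learned on source data and retrain only a small task-specific head, looks like this in PyTorch. A sketch only: the pretrained backbone, the two-class fault/healthy head, and the training loop are illustrative assumptions, not taken from the paper (requires a recent torchvision):

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network trained on a source task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred layers: keep "portions of what is learned".
for p in model.parameters():
    p.requires_grad = False

# Replace the head for the target PHM task (e.g., healthy vs. faulty).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(x, y):
    """One gradient step on target-domain data; only the new head updates."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```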
8

Ali, Tariq Emad, Ameer Hussein Morad, and Mohammed A. Abdala. "Traffic management inside software-defined data centre networking." Bulletin of Electrical Engineering and Informatics 9, no. 5 (October 1, 2020): 2045–54. http://dx.doi.org/10.11591/eei.v9i5.1928.

Abstract:
In recent years, data centre (DC) networks have improved their rapid data-exchange abilities. Software-defined networking (SDN) is presented to change the design of conventional networks by segregating the control plane from the data plane. SDN overcomes the limitations of traditional DC networks caused by the rapidly growing number of applications, websites, data storage needs, etc. Software-defined networking data centres (SDN-DC), based on the OpenFlow (OF) protocol, are used to achieve superior behaviour for executing traffic load-balancing (LB) jobs. The LB function divides the traffic-flow demands between the end devices to avoid link congestion. In short, SDN is proposed to manage more operative configurations, efficient enhancements and further elasticity to handle massive network schemes. In this paper the OpenDaylight controller (ODL-CO) with the new OF 1.4 protocol version and the ant colony optimization algorithm are used to test the performance of the LB function using IPv6 in an SDN-DC network, by studying the throughput, data transfer, bandwidth and average delay of the network before and after use of the LB algorithm. As a result, after applying the LB, throughput, data transfer and bandwidth increased, while the average delay decreased.
9

Lewis, Stuart, Lorraine Beard, Mary McDerby, Robin Taylor, Thomas Higgins, and Claire Knowles. "Developing a Data Vault." International Journal of Digital Curation 11, no. 1 (October 5, 2016): 86–95. http://dx.doi.org/10.2218/ijdc.v11i1.406.

Abstract:
Research data is being generated at an ever-increasing rate. This brings challenges in how to store, analyse, and care for the data. A component of this problem is the stewardship of data and associated files that need a safe and secure home for the medium to long-term. As part of typical suites of Research Data Management services, researchers are provided with large allocations of ‘active data storage’. This is often stored on expensive and fast disks to enable efficient transfer and working with large amounts of data. However, over time this active data store fills up, and researchers need a facility to move older but still valuable data to cheaper storage for long-term care. In addition, research funders are increasingly requiring data to be stored in forms that allow it to be described and retrieved in the future. For data that can’t be shared publicly in an open repository, a closed solution is required that can make use of offline or near-line storage for cost efficiency. This paper describes a solution to these requirements, called the Data Vault.
10

Zhang, Qi, Zhi Jing Zhang, and Xin Jin. "The Research on Hydraulic Components Data Monitoring and Management System Based on Network Real-Time Monitoring." Applied Mechanics and Materials 457-458 (October 2013): 745–50. http://dx.doi.org/10.4028/www.scientific.net/amm.457-458.745.

Abstract:
In order to improve the speed and accuracy of monitoring data from hydraulic components and to improve the method and efficiency of data storage management, a data monitoring and management system for hydraulic components based on network real-time monitoring was researched. The system uses an ADLINK PCI-9114 data acquisition card to collect the signals transferred by the sensors and then passes them to the on-site computer. Data acquisition for temperature, pressure, flow and other signals, together with data analysis, data storage and alarm functions, was realized with LabVIEW software. The system also achieves real-time remote monitoring through a B/S structure. Compared with existing hydraulic control systems, this system integrates data acquisition, data analysis, data storage and alarm functions, realizes remote real-time monitoring, and offers a well-designed, user-friendly interface, so it can be applied in the field of hydraulic component data acquisition.
11

Iordache, Cǎtǎlin, Ran Liu, Justas Balcas, Raimondas Šrivinskas, Yuanhao Wu, Chengyu Fan, Susmit Shannigrahi, Harvey Newman, and Edmund Yeh. "Named Data Networking based File Access for XRootD." EPJ Web of Conferences 245 (2020): 04018. http://dx.doi.org/10.1051/epjconf/202024504018.

Abstract:
We present the design and implementation of a Named Data Networking (NDN) based Open Storage System plug-in for XRootD. This is an important step towards integrating NDN, a leading future internet architecture, with the existing data management systems in CMS. This work outlines the first results of data transfer tests using internal as well as external 100 Gbps testbeds, and compares the NDN-based implementation with existing solutions.
12

Obeidat, Bader, Lama Hashem, and Ra’ed Masa’deh. "The Influence of Knowledge Management Uses on Total Quality Management Practices in Commercial Banks of Jordan." Modern Applied Science 12, no. 11 (October 29, 2018): 1. http://dx.doi.org/10.5539/mas.v12n11p1.

Abstract:
This study examines the influence of knowledge management uses on total quality management practices in commercial banks of Jordan. A quantitative research design using regression analysis was applied, and a total of 250 valid returns were obtained through a questionnaire distributed to the employees of commercial banks in Jordan. Knowledge management use was adopted as the independent variable with four subgroups: knowledge acquisition, knowledge storage, knowledge transfer and knowledge application. Total quality management practices were adopted as the dependent variable with five subgroups: top management support, employee involvement, continuous improvement, customer focus, and data-driven decision management. The results show that knowledge management use significantly affects total quality management practices in three of its dimensions (knowledge acquisition, knowledge storage, and knowledge transfer) but has no effect on knowledge application. The implications of this study are discussed at the end of the paper.
13

Wu, Xiuguo. "Data Sets Replicas Placements Strategy from Cost-Effective View in the Cloud." Scientific Programming 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/1496714.

Abstract:
Replication technology is commonly used to improve data availability and reduce data access latency in cloud storage systems by providing users with different replicas of the same service. Most current approaches largely focus on system performance improvement, neglecting management cost when deciding the number of replicas and their storage locations; this causes a great financial burden for cloud users, because the cost of replica storage and consistency maintenance can lead to high overhead as the number of new replicas grows in a pay-as-you-go paradigm. In this paper, towards achieving an approximately minimal data-set management cost in a practical manner, we propose a replica placement strategy from a cost-effective view, with the premise that system performance meets requirements. Firstly, we design data-set management cost models, including storage cost and transfer cost. Secondly, we use access frequency and average response time to decide which data sets should be replicated. Then, a method for calculating the number of replicas and their storage locations with minimum management cost is proposed, based on a location-problem graph. Both theoretical analysis and simulations show that the proposed strategy offers lower management cost with fewer replicas.
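The storage-versus-transfer trade-off at the heart of such cost models can be shown with a toy example. Everything below, the prices, the hit-fraction curve, and the feasibility rule standing in for a response-time requirement, is an illustrative assumption, not the paper's model:

```python
def hit_fraction(r):
    """Illustrative: fraction of requests served by a nearby replica
    when r replicas exist (more replicas -> fewer remote fetches)."""
    return r / (r + 2)

def total_cost(r, storage_price=0.02, size_gb=500.0,
               requests=10_000, transfer_price=0.01):
    """Toy monthly cost of keeping r replicas of one data set."""
    storage = r * size_gb * storage_price                         # grows with r
    transfer = requests * (1 - hit_fraction(r)) * transfer_price  # shrinks with r
    return storage + transfer

# Choose the cheapest replica count among those meeting a performance
# proxy (here: at least half of requests served locally).
feasible = [r for r in range(1, 11) if hit_fraction(r) >= 0.5]
best = min(feasible, key=total_cost)
print(best, round(total_cost(best), 2))   # -> 2 70.0
```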
14

Kosar, Tevfik, Ismail Akturk, Mehmet Balman, and Xinqi Wang. "PetaShare: A Reliable, Efficient and Transparent Distributed Storage Management System." Scientific Programming 19, no. 1 (2011): 27–43. http://dx.doi.org/10.1155/2011/901230.

Abstract:
Modern collaborative science has placed increasing burden on data management infrastructure to handle the increasingly large data archives generated. Besides functionality, reliability and availability are also key factors in delivering a data management system that can efficiently and effectively meet the challenges posed and compounded by the unbounded increase in the size of data generated by scientific applications. We have developed a reliable and efficient distributed data storage system, PetaShare, which spans multiple institutions across the state of Louisiana. At the back-end, PetaShare provides a unified name space and efficient data movement across geographically distributed storage sites. At the front-end, it provides lightweight clients that enable easy, transparent and scalable access. In PetaShare, we have designed and implemented an asynchronously replicated multi-master metadata system for enhanced reliability and availability, and an advanced buffering system for improved data transfer performance. In this paper, we present the details of our design and implementation, show performance results, and describe our experience in developing a reliable and efficient distributed data management system for data-intensive science.
15

Das, Moumita, Jack C. P. Cheng, and Kincho H. Law. "An ontology-based web service framework for construction supply chain collaboration and management." Engineering, Construction and Architectural Management 22, no. 5 (September 21, 2015): 551–72. http://dx.doi.org/10.1108/ecam-07-2014-0089.

Abstract:
Purpose – The purpose of this paper is to present a framework for integrating the construction supply chain in order to resolve the data heterogeneity and data sharing problems in the construction industry. Design/methodology/approach – Standardized web service technology is used in the proposed framework for data specification, transfer, and integration. The open standard SAWSDL is used to annotate web service descriptions with pointers to concepts defined in ontologies. The NoSQL database Cassandra is used for distributed data storage among construction supply chain stakeholders. Findings – Ontology can be used to support heterogeneous data transfer and integration through web services. Distributed data storage facilitates data sharing and enhances data control. Practical implications – This paper presents examples of two ontologies for expressing construction supply chain information – an ontology for material and an ontology for purchase order. An example scenario is presented to demonstrate the proposed web service framework for a material procurement process involving three parties, namely, project manager, contractor, and material supplier. Originality/value – The use of web services is not new to construction supply chains (CSCs). However, it still faces problems in channelizing information along CSCs due to data heterogeneity. Trust issues are also a barrier to information sharing for integrating supply chains in a centralized collaboration system. In this paper, the authors present a web service framework which facilitates storage and sharing of information in a distributed manner, mediated through ontology-based web services. Security is enhanced with access control. A data model for the distributed databases is also presented for data storage and retrieval.
16

Tian, Haishan, Fangfang Ju, Hongshan Nie, Qiong Yang, Yuanyu Wu, and Shuangjian Li. "Study on the file management method of data storage system for airborne radar." Royal Society Open Science 8, no. 6 (June 2021): 210221. http://dx.doi.org/10.1098/rsos.210221.

Abstract:
In order to solve the problem that the air-to-ground data transfer rate is much lower than the radar data rate, an onboard system is commonly used for storing airborne radar data. However, there are two main problems in data storage using the traditional file management method. The first is that the frequent data updating of the file allocation table (FAT) and the file directory table (FDT) causes a high frequency of address jumps among the discontinuous areas, which leads to a long response time. The second is that the updating frequencies of the FAT, the FDT and the data region are seriously inconsistent, which results in uneven wear of the three areas. To solve these two problems, a file management method which optimizes the data writing in the three areas of the FAT, the FDT and the data region is proposed in this study. An actual measurement is carried out on a data storage system of the airborne radar using the proposed file management method. The result shows that the proposed method significantly reduces the updating frequency of the FAT and the FDT, and achieves wear levelling of the file area and the data region.
17

Scale, Dale M. "Electronic Data Collection for Tomorrow's Forest." Forestry Chronicle 65, no. 5 (October 1, 1989): 370–71. http://dx.doi.org/10.5558/tfc65370-5.

Abstract:
The Fast Growing Forests Technology Development Group is committed to the development and transfer of forest management technology. To improve the efficiency of data collection and the integrity of the data collected, the group has implemented a system of electronic data collection utilizing the DAP Microflex and PC1000 hand-held units. Programs have been developed for applications such as field trial data collection, timber cruising the Larose Agreement Forest and Domtar hybrid poplar production plantation forest, stand marking, cold storage inventory, log scaling, plus tree collection, electronic weigh scales and other related forestry applications.
18

Xie, Chuan. "Based on B/B/S Virtual Network Storage System Design and Implementation." Advanced Engineering Forum 6-7 (September 2012): 848–51. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.848.

Abstract:
In view of the network storage requirements raised by network applications, a network storage management system based on the B/B/S concept is proposed. The system can organically organize the various kinds of heterogeneous storage servers on the network to provide users with an extensible virtual storage space. Network users can access the virtual storage space from any network terminal, save data, and have application reads transferred automatically.
19

Hashem, Lama, and Mais Jaradat. "The Impact of Knowledge Management Uses on Total Quality Management Practices: An Empirical Study on Commercial Banks of Jordan." Journal of Business & Management (COES&RJ-JBM) 8, no. 2 (April 1, 2020): 35. http://dx.doi.org/10.25255/2306.8043.2020.8.2.35.64.

Abstract:
This study tested the impact of knowledge management uses on total quality management practices in commercial banks of Jordan. A quantitative research design using regression analysis was applied, and a total of 250 valid returns were obtained through a questionnaire distributed to the employees of commercial banks in Jordan. Knowledge management use was adopted as the independent variable with four subgroups: knowledge acquisition, knowledge storage, knowledge transfer and knowledge application. Total quality management practices were adopted as the dependent variable with five subgroups: top management support, employee involvement, continuous improvement, customer focus, and data-driven decision management. The results show that three of the knowledge management dimensions (i.e. knowledge acquisition, knowledge storage, and knowledge transfer) significantly affect total quality management practices. However, knowledge application showed an insignificant effect. The theoretical and practical implications of this study are discussed at the end of this paper.
20

Amović, Mladen, Miro Govedarica, Aleksandra Radulović, and Ivana Janković. "Big Data in Smart City: Management Challenges." Applied Sciences 11, no. 10 (May 17, 2021): 4557. http://dx.doi.org/10.3390/app11104557.

Abstract:
Smart cities use digital technologies such as cloud computing, the Internet of Things, or open data in order to overcome the limitations of traditional representation and exchange of geospatial data. This concept ensures a significant increase in the use of data to establish new services that contribute to better sustainable development and monitoring of all phenomena that occur in urban areas. The use of modern geoinformation technologies, such as sensors for collecting different geospatial and related data, requires adequate storage options for further data analysis. In this paper, we suggest the biG dAta sMart cIty maNagEment SyStem (GAMINESS), which is based on the Apache Spark big data framework. The model of the GAMINESS management system is based on the principles of big data modeling, which differ greatly from those of standard databases. This approach provides the ability to store and manage huge amounts of structured, semi-structured, and unstructured data in real time. System performance is raised to a higher level by process parallelization, explained through the five V principles of the big data paradigm. Existing solutions based on the five V principles are focused only on data visualization, not the data themselves. Such solutions are often limited by different storage mechanisms and by the ability to perform complex analyses on large amounts of data with expected performance. The GAMINESS management system overcomes these disadvantages by converting smart city data to a big data structure without limitations related to data formats or standards in use. The suggested model contains two components: a geospatial component and a sensor component, based on the CityGML and SensorThings standards respectively. The developed model has the ability to exchange data regardless of the standard or data format used, via the proposed Apache Spark data framework schema. The verification of the proposed model is done within a case study for part of the city of Novi Sad.
21

Wang, Xu. "Research on Methods to Accelerate the Speed of Data Input Based on Library Computer Management System." Applied Mechanics and Materials 687-691 (November 2014): 1724–27. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1724.

Abstract:
With the development of network, communication and computer technology, the library community has reached a high point in building digital libraries. This paper argues that it is necessary to establish a human-machine interactive library management information system, based on information, computer and networking technology, that combines information, management and system so as to maximize library functions, optimize services and standardize management. Speeding up data entry is one of the core aspects, and rapid data input is a main function of the library management system. After a discussion of the principles and methods of data input, OPAC systems and barcode technology are shown to satisfy the requirements. For data input and output, the library can easily handle all business input and output data; for data storage and transfer, a reasonable database structure and rational distribution of stored data give the library secure storage and mobility of all business data, with highly centralized data management and sharing.
22

Chard, Kyle, Eli Dart, Ian Foster, David Shifflett, Steven Tuecke, and Jason Williams. "The Modern Research Data Portal: a design pattern for networked, data-intensive science." PeerJ Computer Science 4 (January 15, 2018): e144. http://dx.doi.org/10.7717/peerj-cs.144.

Abstract:
We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.
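The Python APIs mentioned are those of the Globus SDK. A sketch of the portal-side transfer step, assuming a registered client and known endpoint IDs; treat the token flow and scope string as assumptions and consult the companion site (https://docs.globus.org/mrdp) for the authoritative examples:

```python
import globus_sdk

# Authenticate as a registered portal application (IDs are placeholders).
auth = globus_sdk.ConfidentialAppAuthClient("CLIENT_ID", "CLIENT_SECRET")
tokens = auth.oauth2_client_credentials_tokens(
    requested_scopes="urn:globus:auth:scope:transfer.api.globus.org:all")
access_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(access_token))

# Ask the transfer service to move a dataset from the portal's data
# enclave to the user's chosen endpoint; the control logic stays out
# of the data path, as in the design pattern.
tdata = globus_sdk.TransferData(tc, "SRC_ENDPOINT_ID", "DST_ENDPOINT_ID",
                                label="portal download")
tdata.add_item("/enclave/dataset-001/", "/~/dataset-001/", recursive=True)
task = tc.submit_transfer(tdata)
print("transfer task:", task["task_id"])
```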
23

Li, Jin, Songqi Wu, Yundan Yang, Fenghui Duan, Hui Lu, and Yueming Lu. "Controlled Sharing Mechanism of Data Based on the Consortium Blockchain." Security and Communication Networks 2021 (March 20, 2021): 1–10. http://dx.doi.org/10.1155/2021/5523489.

Abstract:
In the process of sharing data, the costless replication of electric energy data leads to uncontrolled data and makes third-party access verification difficult. This paper proposes a controlled data sharing mechanism based on the consortium blockchain. The range of data flow is controlled by the data isolation mechanism between channels provided by the consortium blockchain: a data storage consortium chain is constructed to achieve trusted data storage, and attribute-based encryption is combined with it to implement data access control and meet the demands for fine-grained access control and secure sharing. A data flow transfer ledger is built to record the life cycle management of the original data and the data transfer process of each data controller. Taking electric energy data sharing as an example scenario, the scheme is designed and simulated on Linux and Hyperledger Fabric. Experimental results verify that the mechanism can effectively control the scope of access to electric energy data and give the data owner control over the data.
24

Hu, Hao, Fazhi Qi, Hongmei Zhang, Haolai Tian, and Qi Luo. "The design of a data management system at HEPS." Journal of Synchrotron Radiation 28, no. 1 (January 1, 2021): 169–75. http://dx.doi.org/10.1107/s1600577520015167.

Abstract:
According to the estimated data rates, it is predicted that 24 PB raw experimental data will be produced per month from 14 beamlines at the first stage of the High-Energy Photon Source (HEPS) in China, and the volume of experimental data will be even greater with the completion of over 90 beamlines at the second stage in the future. To make sure that the huge amount of data collected at HEPS is accurate, available and accessible, an effective data management system (DMS) is crucial for deploying the IT systems. In this article, a DMS is designed for HEPS which is responsible for automating the organization, transfer, storage, distribution and sharing of the data produced from experiments. First, the general situation of HEPS is introduced. Second, the architecture and data flow of the HEPS DMS are described from the perspective of facility users and IT, and the key techniques implemented in this system are introduced. Finally, the progress and the effect of the DMS deployed as a testbed at beamline 1W1A of the Beijing Synchrotron Radiation Facility are shown.
25

Jalasri, M., S. Nalini, N. Magesh Kumar, and J. Elumalai. "Data Storage in the Fog Computing for Smart Environment Monitoring System (SEMS)." Journal of Computational and Theoretical Nanoscience 16, no. 8 (August 1, 2019): 3196–200. http://dx.doi.org/10.1166/jctn.2019.8160.

Abstract:
Environment monitoring systems for smart cities use diverse kinds of sensors to accumulate information for managing resources efficiently. Such systems provide services such as home automation, weather monitoring, air quality management and pollution prediction. This paper presents a customized design for environment monitoring in which the basic parameters are temperature, humidity and CO2. The sensed data need to be stored and processed. In previous systems, sensed data were stored using cloud computing. In the proposed system, fog computing is used to store the sensed data from the smart environment monitoring system (SEMS) and to transfer the data from the fog device to a mobile app, which is more efficient than cloud computing.
26

Courtney, S. B., M. J. Tricard, and R. W. Hendricks. "PC-Based Management and Analysis of X-Ray Residual Stress Data." Advances in X-ray Analysis 36 (1992): 535–41. http://dx.doi.org/10.1154/s0376030800019169.

Abstract:
The authors have developed two independent software packages that store x-ray peak locations, integrated intensities and full-width half-maximum intensity data as a function of diffractometer tilt and orientation angle; this information is used to compute residual stress tensor values. Each program retrieves the fitted x-ray peak locations from a dBASE-compatible data set that is independent of both x-ray diffractometer and acquisition software. Machine-specific routines have been coded to transfer peak data and general diffraction setup information from several different x-ray acquisition platforms into this common format. The two database management programs provide stand-alone storage, retrieval, analysis and graphic output of data, and thus have become practical laboratory vehicles toward establishing a standard database format for storing x-ray strain measurements and the residual stress values calculated therefrom.
27

Chandawarkar, Rajiv, and Prakash Nadkarni. "Safe clinical photography: best practice guidelines for risk management and mitigation." Archives of Plastic Surgery 48, no. 3 (May 15, 2021): 295–304. http://dx.doi.org/10.5999/aps.2021.00262.

Abstract:
Clinical photography is an essential component of patient care in plastic surgery. The use of unsecured smartphone cameras, digital cameras, social media, instant messaging, and commercially available cloud-based storage devices threatens patients' data safety. This paper identifies potential risks of clinical photography and heightens awareness of safe clinical photography. Specifically, we evaluated existing risk-mitigation strategies globally, comparing them to industry standards in similar settings, and formulated a framework for developing a risk-mitigation plan for avoiding data breaches by identifying the safest methods of picture taking, transfer to storage, retrieval, and use, both within and outside the organization. Since threats evolve constantly, the framework must evolve too. Based on a literature search of both PubMed and the web (via Google) with key phrases and child terms (for PubMed), the risks and consequences of data breaches in individual processes in clinical photography are identified. Current clinical-photography practices are described. Lastly, we evaluate current risk-mitigation strategies for clinical photography by examining guidelines from professional organizations, governmental agencies, and non-healthcare industries. Combining lessons learned from the steps above into a comprehensive framework that could contribute to national/international guidelines on safe clinical photography, we provide recommendations for best practice guidelines. It is imperative that best practice guidelines for the simple, safe, and secure capture, transfer, storage, and retrieval of clinical photographs be co-developed through cooperative efforts between providers, hospital administrators, clinical informaticians, IT governance structures, and national professional organizations. This would significantly safeguard patient data security and provide the privacy that patients deserve and expect.
28

Poskonin, Mikhail V., Andrey O. Kalinin, Igor V. Kovalev, and Mikhail V. Saramud. "Optimization of electronic document management systems by means of encoding and visualization of stored data in the integrated development environment." MATEC Web of Conferences 226 (2018): 04021. http://dx.doi.org/10.1051/matecconf/201822604021.

Abstract:
The article considers the issue of storage, cataloging and transfer of documentation within an electronic document management system (EDMS). Modern document management solutions can be considered as the intersection of traditional and digital technologies, in particular the storage of software code on paper. Storing program code on paper implies a large archive for data storage, which includes modifications whenever versions of the program code are updated. Graphical ways of storing large amounts of data have been known for a long time, but they usually require a long procedure for restoring the original information, exclude recovery from paper, or are significantly limited in the amount of stored data. As one of the elements of the composite electronic document management system, it is proposed to use graphical encoding of information (QR-code generation technology), which will increase resistance to unauthorized modification of the program code and simplify the procedures for storing, identifying and verifying information. As an elementary block of information, a separate software module responsible for one of the subprograms in the RTOS environment is used. The proposed algorithm for storing and transferring information implies its compression by software, encryption and transmission. The article compares various compression algorithms and their efficiency. The presence of a cataloguer in each graphic element allows for quick search, comparison and verification of blocks.
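The pipeline sketched in the abstract, compress a module, render it as a graphic block, recover it later, can be mocked up with standard libraries. `zlib` and the third-party `qrcode` package are stand-ins for whatever the EDMS actually uses, and the encryption step is omitted here:

```python
import base64
import zlib

import qrcode

# One elementary block = one software module (placeholder content).
module_source = b"/* RTOS subprogram module source would go here */" * 10

# Compress, then base64-encode so the payload is text-safe for QR encoding.
payload = base64.b64encode(zlib.compress(module_source, level=9)).decode("ascii")

# A single QR symbol holds at most a few kilobytes, so a real module
# would be split across several codes, each carrying catalog metadata.
img = qrcode.make(payload)
img.save("module_block.png")

# Recovery path: scan -> base64-decode -> decompress -> verify.
restored = zlib.decompress(base64.b64decode(payload))
assert restored == module_source
```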
29

Totterdale, Robert L. "Globalization and Data Privacy." International Journal of Information Security and Privacy 4, no. 2 (April 2010): 19–35. http://dx.doi.org/10.4018/jisp.2010040102.

Abstract:
Global organizations operate in multiple countries and are subject to both local and federal laws in each of the jurisdictions in which they conduct business. The collection, storage, processing, and transfer of data between countries or operating locations are often subject to a multitude of data privacy laws, regulations, and legal systems that are at times in conflict. Companies struggle to have the proper policies, processes, and technologies in place that will allow them to comply with a myriad of laws which are constantly changing. Using an established privacy management framework, this study provides a summary of major data privacy laws in the U.S., Europe, and India, and their implication for businesses. Additionally, in this paper, relationships between age, residence (country), attitudes and awareness of business rules and data privacy laws are explored for 331 business professionals located in the U.S and India.
30

Guo, Lantu, Meiyu Wang, and Yun Lin. "Electromagnetic Environment Portrait Based on Big Data Mining." Wireless Communications and Mobile Computing 2021 (April 30, 2021): 1–13. http://dx.doi.org/10.1155/2021/5563271.

Abstract:
With the development of IoT in smart cities, the electromagnetic environment (EME) in cities is becoming more and more complex. A full understanding of the characteristics of past spectrum resource utilization is the key to improving the efficiency of spectrum management. In order to explore the characteristics of spectrum utilization more comprehensively, this paper designs an EME portrait model. By examining the statistical information of the spectrum data, including changes in the noise floor and channel utilization of each individual wireless service, the correlations of different channels across spectrum, time and space are extracted, and this information is merged into a high-dimensional model through a consistency transformation to form the EME portrait. The portrait model is not only convenient for storage and retrieval but also beneficial for transfer and expansion, and will become an important foundation for intelligent electromagnetic spectrum management.
31

Al-Museelem, Waleed, and Chun Lin Li. "Data Security and Data Privacy in Cloud Computing." Advanced Materials Research 905 (April 2014): 687–92. http://dx.doi.org/10.4028/www.scientific.net/amr.905.687.

Abstract:
Cloud computing has led to the development of IT to more sophisticated levels by improving the capacity and flexibility of data storage and by providing scalable computation and processing power that matches dynamic data requirements. Cloud computing has many benefits, which have led to the transfer of many enterprise applications and data to public and hybrid clouds. However, many organizations cite the protection of privacy and the security of data as the major issues preventing them from adopting cloud computing. The only way successful implementation of clouds can be achieved is through effective enhancement and management of data security and privacy in clouds. This research paper analyzes the privacy and protection of data in cloud computing through all data lifecycle stages, providing an overall perspective of cloud computing while highlighting key security issues and concerns which should be addressed. It also discusses several current solutions and further proposes more solutions which can enhance the privacy and security of data in clouds. Finally, the research paper describes future research work on the protection of data privacy and security in clouds.
32

Garg, Anshita. "Searching using B/B+ Tree in Database Management System." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 31, 2021): 3702–6. http://dx.doi.org/10.22214/ijraset.2021.36971.

Abstract:
This is a research-based project, and the basic point motivating it is learning and implementing algorithms that reduce time and space complexity. In the first part of the project, we reduce the time taken to search a given record by using a B/B+ tree rather than indexing and traditional sequential access. It is concluded that disk-access times are much slower than main-memory access times: typical seek times and rotational delays are of the order of 5 to 6 milliseconds, and typical data transfer rates are in the range of 5 to 10 million bytes per second; therefore, main-memory access times are likely to be at least 4 or 5 orders of magnitude faster than disk access on any given system. The objective is thus to minimize the number of disk accesses, and this project is concerned with techniques for achieving that objective, i.e. techniques for arranging the data on a disk so that any required piece of data, say some specific record, can be located in as few I/Os as possible. In the second part of the project, dynamic programming problems were solved with Recursion, Recursion with Storage, Iteration with Storage, and Iteration with Smaller Storage. The problems solved in these four variations are Fibonacci, Count Maze Path, Count Board Path, and Longest Common Subsequence. All four variations are an improvement over one another, and thus time and space complexity are reduced significantly as we go from Recursion to Iteration with Smaller Storage.
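The four variations map directly onto, for example, the Fibonacci problem; a short sketch of each:

```python
def fib_recursive(n):            # plain recursion: O(2^n) time
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_memo(n, memo=None):      # recursion with storage: O(n) time, O(n) space
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

def fib_table(n):                # iteration with storage: O(n) time, O(n) space
    dp = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

def fib_two_vars(n):             # iteration with smaller storage: O(1) space
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_recursive(10) == fib_memo(10) == fib_table(10) == fib_two_vars(10) == 55
```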
33

Naresh, V., M. Anudeep, M. Saipraneeth, A. Saikumar Reddy, and V. Navya. "Encryption-Based Secure and Efficient Access Control to Out Sourced Data in Cloud Computing." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 315. http://dx.doi.org/10.14419/ijet.v7i2.32.15703.

Abstract:
The cloud storage system, comprising storage servers, provides long-term storage services over the Internet. Keeping data in third-party clouds raises serious concern about the confidentiality of data, alongside the goal of reducing data management costs; nonetheless, security guarantees must be provided for outsourced data. We design and implement a secure cloud storage system that provides secure and available file protection for file management and secure data transfer. Outsourced files carry a file access policy, with the possibility of deleting files, so that access is denied to anyone not satisfying the file access policy. To achieve these security objectives, a set of cryptographic keys is used, maintained separately by one or more hosts or managers. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code, from which a secure cloud storage framework is formulated. The cloud storage system not only provides secure and robust storage and retrieval of data, but also lets a user forward their data to another user without the data being routed back.
34

Gu, Junmin, Dimitrios Katramatos, Xin Liu, Vijaya Natarajan, Arie Shoshani, Alex Sim, Dantong Yu, Scott Bradley, and Shawn McKee. "StorNet: Integrated Dynamic Storage and Network Resource Provisioning and Management for Automated Data Transfers." Journal of Physics: Conference Series 331, no. 1 (December 23, 2011): 012002. http://dx.doi.org/10.1088/1742-6596/331/1/012002.

35

Trofimov, Ivan, Leonid Trofimov, Sergei Podkovalnikov, Lyudmila Chudinova, Lev Belyaev, and Vladimir Savelév. "The computing and information system for research of prospective electric power grids expansion." Yugoslav Journal of Operations Research 29, no. 4 (2019): 465–81. http://dx.doi.org/10.2298/yjor181115021t.

Abstract:
The paper describes a software tool implemented by the Melentiev Energy Systems Institute SB RAS, aimed at solving a wide range of energy issues. In this article, the Computing and Information System (CIS) means a software tool that provides collection, transfer, processing, storage, geo-visualization, and output of digital technical and economic data of different energy/power entities. Besides, this tool is incorporated within a mathematical model for optimization of the expansion and operating modes of power systems. The paper discusses an example of how data storage and data representation in an object-oriented database help to improve the efficiency of research on prospective electric power system expansion and operation.
36

Sokol, Volodymyr, Mariia Bilova, and Oleksii Kosmachov. "Typical functionality, application and deployment specifics of knowledge management systems in IT companies." Vìsnik Nacìonalʹnogo unìversitetu "Lʹvìvsʹka polìtehnìka". Serìâ Ìnformacìjnì sistemi ta merežì 8 (December 5, 2020): 45–54. http://dx.doi.org/10.23939/sisn2020.08.045.

Abstract:
The work focuses on the features of knowledge storage and reuse in companies whose activities are related to software development. The concepts of knowledge, knowledge management and knowledge management systems are presented from the standpoint of their usage in an IT company. Organizational knowledge is divided into explicit knowledge, which can be presented in the form of a letter, instructions, a reference book, etc., and implicit knowledge, which exists only in an employee's mind and cannot be easily extracted. The main goal of knowledge management is formulated: organizing the processes of creation, storage, acquisition, transfer and application of knowledge. The main knowledge strategies are described, including the creation of knowledge, storage and retrieval of knowledge, transfer and exchange of knowledge, and application of knowledge, with examples of their use in software development. The knowledge management system is characterized as an information system designed to improve the efficiency of an organization's knowledge management. Such a system helps solve problems related to the variety of software projects in which an IT company is involved. The main structural components and functions of knowledge management systems are identified, including search tools, content and interaction management tools, data storage and mining tools, as well as groupware and artificial intelligence tools. The usage and implementation of knowledge management systems in small and medium-sized IT companies are analyzed using the example of Academy Smart Ltd. Emerging implementation issues and success factors are considered. The features of the knowledge management process in Academy Smart Ltd are described, conclusions about the efficiency of this organization are drawn, and directions for further research are formulated accordingly.
37

Marinelli, Martina, Vincenzo Positano, Valentina Lorenzoni, Chiara Caselli, Maurizio Mangione, Paolo Marcheschi, Stefano Puzzuoli, Natalia Esposito, Giuseppe Andrea L’Abbate, and Danilo Neglia. "A modular informatics platform for effective support of collaborative and multicenter studies in cardiology." Health Informatics Journal 22, no. 4 (July 26, 2016): 1083–100. http://dx.doi.org/10.1177/1460458215609743.

Abstract:
Collaborative and multicenter studies permit a large number of patients to be enrolled within a reasonable time and provide the opportunity to collect different kinds of data. Informatics platforms play an important role in the management, storage, and exchange of data between the participants involved in the study. In this article, we describe a modular informatics platform designed and developed to support collaborative and multicenter studies in cardiology. In each developed module, data management is implemented following locally defined protocols. The modular design of the developed platform allows independent transfer of different kinds of data, such as biological samples, imaging raw data, and patients' digital information. Moreover, it offers safe central storage of the data collected during the study. The developed platform was successfully tested during a European collaborative and multicenter study focused on evaluating multimodal non-invasive imaging to diagnose and characterize ischemic heart disease.
38

Raharja, Bayu Dwi. "PENERAPAN DISCRETE COSINE TRANSFORM (DCT) TERHADAP KOMPRESI CITRA DIGITAL" [Application of the Discrete Cosine Transform (DCT) to digital image compression]. Indonesian Journal of Business Intelligence (IJUBI) 4, no. 1 (June 27, 2021): 31. http://dx.doi.org/10.21927/ijubi.v4i1.1790.

Abstract:
Image compression is a process that can reduce image size. In general, there are two types: lossless compression and lossy compression. The method usually used in the JPEG standard is lossy compression, which removes some of the image information and takes advantage of the human eye's insensitivity to fine color gradations. With large files, data transfer and exchange become increasingly difficult, especially when exchanging data through small storage areas or tools that limit the size of storage for data exchange. The results show that the DCT method is able to compress files by up to 96%, with an average compression of 74% across all files.
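The lossy step described, transform, discard coefficients the eye barely misses, then invert, can be sketched with SciPy on one 8x8 block. The block size and keep-fraction are illustrative; the actual JPEG pipeline additionally applies quantization tables and entropy coding:

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=0.25):
    """2-D DCT of one 8x8 block, zeroing all but the largest-magnitude
    coefficients; illustrative of lossy DCT compression, not full JPEG."""
    coeffs = dctn(block, norm="ortho")
    threshold = np.quantile(np.abs(coeffs), 1 - keep)
    coeffs[np.abs(coeffs) < threshold] = 0.0   # information is discarded here
    return coeffs

def decompress_block(coeffs):
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
restored = decompress_block(compress_block(block))
print("max pixel error:", np.abs(block - restored).max())
```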
39

Wei, Xiong, Zhi Ying Wang, and Min Hua Jiang. "A Research on Active Storage Task Allocation Strategy Based on MMC." Applied Mechanics and Materials 536-537 (April 2014): 562–65. http://dx.doi.org/10.4028/www.scientific.net/amm.536-537.562.

Abstract:
The processing capacity of a single server is unable to satisfy the parallel processing of massive data, and data transmission within the storage system grows accordingly. This paper adopts an active storage system to transfer computation tasks and management capability to the data end, improving the parallel processing capability of application programs and reducing the data transmission rate in the system. Task arrivals are modeled as a Poisson flow, the service time of each processor follows a negative exponential distribution, and the service rule is a "first come, first served" mixing model. Experiments show that the data transmission rate within the M/M/C system is reduced by 15% on average compared with random task allocation, and the system speed-up ratio is improved by 2.1 compared with the average.
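Under the stated assumptions (Poisson arrivals, exponential service, first-come-first-served), such a multi-processor storage node behaves as a classic M/M/c queue, so its delay behaviour can be computed directly. The formulas below are the standard Erlang-C results, not figures from the paper, and the example rates are made up:

```python
from math import factorial

def erlang_c(c, lam, mu):
    """P(wait > 0) for an M/M/c queue with arrival rate lam, service rate mu."""
    a = lam / mu                  # offered load in Erlangs
    rho = a / c                   # per-server utilization
    assert rho < 1, "queue is unstable"
    top = a**c / (factorial(c) * (1 - rho))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c, lam, mu):
    """Mean time in queue: W_q = C(c, a) / (c*mu - lam)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Example: 4 processors, 30 tasks/s arriving, each served at 10 tasks/s.
print(erlang_c(4, 30, 10), mean_wait(4, 30, 10))
```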
40

Saleh, Alireza, Reza Javidan, and Mohammad Taghi FatehiKhajeh. "A four-phase data replication algorithm for data grid." Journal of Advanced Computer Science & Technology 4, no. 1 (April 12, 2015): 163. http://dx.doi.org/10.14419/jacst.v4i1.4009.

Abstract:
Nowadays, scientific applications generate a huge amount of data, in terabytes or petabytes. Data grids are currently proposed as a solution to large-scale data management problems, including efficient file transfer and replication. Data is typically replicated in a Data Grid to improve the job response time and data availability. A reasonable number and the right locations for replicas have become a challenge in the Data Grid. In this paper, a four-phase dynamic data replication algorithm based on temporal and geographical locality is proposed. It includes: 1) evaluating and identifying the popular data and triggering a replication operation when the popularity passes a dynamic threshold; 2) analyzing and modeling the relationship between system availability and the number of replicas, and calculating a suitable number of new replicas; 3) evaluating and identifying the popular data in each site, and placing replicas among them; 4) removing the files with the least cost of average access time when encountering insufficient space for replication. The algorithm was tested using OptorSim, a grid simulator developed by the European Data Grid projects. The simulation results show that the proposed algorithm performs better than other algorithms in terms of job execution time, effective network usage and percentage of storage filled.
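Phase 1, trigger replication when a file's popularity passes a dynamic threshold, can be sketched in a few lines. The specific threshold rule (a multiple of the window's mean access count) is an assumption for illustration, not the paper's exact rule:

```python
from collections import Counter

accesses = Counter()   # file id -> accesses in the current time window

def record_access(file_id):
    accesses[file_id] += 1

def replication_candidates(factor=1.5):
    """Files whose popularity exceeds a dynamic threshold: here, `factor`
    times the mean access count of the window (illustrative rule)."""
    if not accesses:
        return []
    threshold = factor * sum(accesses.values()) / len(accesses)
    return [f for f, n in accesses.items() if n > threshold]

for f in ["a", "a", "a", "a", "b", "c"]:
    record_access(f)
print(replication_candidates())   # -> ['a']
```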
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Zhe, Yun Liang, Jian Wei Ma, and Ping Zhang. "The Construction and Practice of Energy Consumption Management System for Coal-Fired Power Plant." Applied Mechanics and Materials 448-453 (October 2013): 2781–85. http://dx.doi.org/10.4028/www.scientific.net/amm.448-453.2781.

Full text
Abstract:
The electric power industry has great potential for energy saving and emission reduction. Remote monitoring and analysis of the energy consumption of coal-fired units is an important method and basis for energy saving. For this system, a smart data acquisition device was developed to acquire energy consumption parameters, and the cogeneration units' "exceed power" algorithm and a general energy consumption model were designed. The system satisfies the industrial requirements of accurate and reliable data transfer and storage and effectively enhances rapid modeling capabilities, thereby providing technical support for energy saving and emission reduction work.
APA, Harvard, Vancouver, ISO, and other styles
42

Dona, Rizart, and Riccardo Di Maria. "The ESCAPE Data Lake: The machinery behind testing, monitoring and supporting a unified federated storage infrastructure of the exabyte-scale." EPJ Web of Conferences 251 (2021): 02060. http://dx.doi.org/10.1051/epjconf/202125102060.

Full text
Abstract:
The EU-funded ESCAPE project aims at enabling a prototype federated storage infrastructure, a Data Lake, that would handle data on the exabyte scale, address the FAIR data management principles, and provide science projects with a unified, scalable data management solution for accessing and analyzing large volumes of scientific data. In this respect, data transfer and management technologies such as Rucio, FTS and GFAL are employed, along with monitoring solutions such as Grafana, Elasticsearch and perfSONAR. This paper presents and describes the technical details behind the machinery of testing and monitoring of the Data Lake; this includes continuous automated functional testing, network monitoring, and the development of insightful visualizations that reflect the current state of the system. Topics also addressed include the integration with the CRIC information system as well as the initial support for token-based authentication/authorization using OpenID Connect. The current architecture of these components is provided and future enhancements are discussed.
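
As an illustration of the continuous functional testing mentioned above, the following sketch probes a storage endpoint over HTTP with a bearer token; the endpoint URL, path, and token are placeholders, and this is not the ESCAPE test suite itself:

```python
# Probe a storage endpoint over HTTP with a bearer token: upload, read back,
# verify, clean up. URL, path, and token below are placeholders.
import requests

def functional_test(endpoint: str, token: str) -> bool:
    headers = {"Authorization": f"Bearer {token}"}
    url = f"{endpoint}/escape-test/probe.txt"
    payload = b"data-lake functional probe"
    put = requests.put(url, data=payload, headers=headers, timeout=30)
    get = requests.get(url, headers=headers, timeout=30)
    requests.delete(url, headers=headers, timeout=30)   # best-effort cleanup
    return put.ok and get.ok and get.content == payload

print("endpoint healthy:",
      functional_test("https://storage.example.org", "<oidc-token>"))
```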
APA, Harvard, Vancouver, ISO, and other styles
43

Lachmann, Malin, Jaime Maldonado, Wiebke Bergmann, Francesca Jung, Markus Weber, and Christof Büskens. "Self-Learning Data-Based Models as Basis of a Universally Applicable Energy Management System." Energies 13, no. 8 (April 21, 2020): 2084. http://dx.doi.org/10.3390/en13082084.

Full text
Abstract:
In the transition from fossil fuels to renewable energies, grid operators, companies, and farms have an increasing interest in smart energy management systems that can reduce their energy expenses. This requires sufficiently detailed models of the underlying components and forecasts of generation and consumption over future time horizons. In this work, a real-world case study investigates how data-based methods built on regression and clustering can be applied to this task, so that the potentially extensive effort of physical modeling can be decreased. Models and automated update mechanisms are derived from measurement data for a photovoltaic plant, a heat pump, a battery storage, and a washing machine. A smart energy system is realized in a real household to exploit the resulting models for minimizing energy expenses via optimization of self-consumption. Experimental data are presented that illustrate the models' performance in the real-world system. The study concludes that it is possible to build a smart, adaptive, forecast-based energy management system without expert knowledge of the detailed physics of system components, but special care must be taken in several aspects of system design to avoid undesired effects that decrease overall system performance.
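
A minimal sketch of the data-based modeling idea, assuming a linear regression from weather features to photovoltaic output; the feature set, values, and refitting step are illustrative, not the paper's models:

```python
# Toy data-based model: linear regression from weather features to PV output,
# refit ("self-learning") as new measurements arrive. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data: [irradiance W/m^2, ambient temperature C] -> PV output in kW
X = np.array([[200, 10], [450, 15], [700, 20], [900, 25], [300, 12]])
y = np.array([0.9, 2.1, 3.4, 4.2, 1.4])
model = LinearRegression().fit(X, y)

# Automated update mechanism: append the latest measurement and refit
X = np.vstack([X, [650, 18]])
y = np.append(y, 3.1)
model.fit(X, y)
print("forecast for 800 W/m^2 at 22 C:", model.predict([[800, 22]])[0])
```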
APA, Harvard, Vancouver, ISO, and other styles
44

Tang, Jun, and Zhi Min Hu. "Study on Integrated Scientific Research Management Information System in Higher Vocational College Based on Workflow." Applied Mechanics and Materials 389 (August 2013): 908–12. http://dx.doi.org/10.4028/www.scientific.net/amm.389.908.

Full text
Abstract:
To realize the information collection, collation, transfer, tracking, and execution required by Hunan Urban Construction College in carrying out its scientific research management function, and based on a redefinition of the workflow process, we combine Petri net modeling technology with workflow technology to design the workflow process amendment and storage mechanism. We also design the data structure of the workflow engine, aiming at breakthroughs in both concepts and methods for realizing the research management information system.
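
A minimal sketch of the Petri net token-firing mechanics that underlie such workflow models; the places and transitions below are hypothetical examples, not the college's actual process:

```python
# Token-game semantics of a tiny Petri net: a transition fires only when all
# of its input places hold enough tokens. Places/transitions are hypothetical.
marking = {"submitted": 1, "under_review": 0, "approved": 0}
transitions = {
    "start_review": ({"submitted": 1}, {"under_review": 1}),
    "approve": ({"under_review": 1}, {"approved": 1}),
}

def fire(name: str) -> bool:
    consume, produce = transitions[name]
    if any(marking[place] < n for place, n in consume.items()):
        return False                     # transition not enabled
    for place, n in consume.items():
        marking[place] -= n
    for place, n in produce.items():
        marking[place] += n
    return True

fire("start_review")
fire("approve")
print(marking)   # {'submitted': 0, 'under_review': 0, 'approved': 1}
```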
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Yunfa, Yifei Tu, Jiawa Lu, and Yunchao Wang. "A Security Transmission and Storage Solution about Sensing Image for Blockchain in the Internet of Things." Sensors 20, no. 3 (February 9, 2020): 916. http://dx.doi.org/10.3390/s20030916.

Full text
Abstract:
With the rapid development of the Internet of Things (IoT), the number of IoT devices has increased exponentially, which raises the security requirements for the management, transmission, and storage of massive IoT data. During the transmission of IoT data, security issues such as data theft and forgery are prone to occur. In addition, most existing data storage solutions are centralized, i.e., data are stored and maintained by a centralized server. Once the server is maliciously attacked, the security of IoT data is greatly threatened. In view of these security issues, a secure transmission and storage solution for sensed images based on blockchain is proposed for the IoT. Firstly, the solution intelligently senses user image information and divides the sensed data into intelligent blocks. Secondly, the different blocks of data are encrypted and transmitted securely through intelligent encryption algorithms. Finally, signature verification and storage are performed through an intelligent verification algorithm. Compared with traditional IoT data transmission and centralized storage solutions, this solution combines the IoT with the blockchain, making use of the blockchain's decentralization, high reliability, and low cost to transfer and store users' image information securely. Security analysis proves that the solution can resist theft attacks and ensure the security of user image information during transmission and storage.
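
Since the paper's "intelligent" algorithms are not spelled out in the abstract, the following sketch substitutes standard primitives (Fernet encryption, HMAC signatures, SHA-256 hash chaining) to illustrate the encrypt-sign-chain pattern it describes; the keys are placeholders:

```python
# Encrypt-sign-chain pattern: each image chunk is encrypted, signed, and linked
# to the previous block by hash. Keys here are placeholders, not a real scheme.
import hashlib, hmac, json
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())
sig_key = b"placeholder-shared-signing-key"

def make_block(chunk: bytes, prev_hash: str) -> dict:
    ciphertext = fernet.encrypt(chunk)                          # confidentiality
    signature = hmac.new(sig_key, ciphertext, hashlib.sha256).hexdigest()
    block = {"prev": prev_hash, "data": ciphertext.decode(), "sig": signature}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_block(block: dict) -> bool:
    expected = hmac.new(sig_key, block["data"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, block["sig"])          # integrity check

b0 = make_block(b"image-chunk-0", prev_hash="0" * 64)
b1 = make_block(b"image-chunk-1", prev_hash=b0["hash"])
print(verify_block(b0), verify_block(b1), b1["prev"] == b0["hash"])
```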
APA, Harvard, Vancouver, ISO, and other styles
46

El-Dalahmeh, Ma’d, Maher Al-Greer, Mo’ath El-Dalahmeh, and Michael Short. "Time-Frequency Image Analysis and Transfer Learning for Capacity Prediction of Lithium-Ion Batteries." Energies 13, no. 20 (October 19, 2020): 5447. http://dx.doi.org/10.3390/en13205447.

Full text
Abstract:
Energy storage is recognized as a key technology for enabling the transition to a low-carbon, sustainable future. Energy storage requires careful management, and capacity prediction of a lithium-ion battery (LIB) is an essential indicator in a battery management system for electric vehicles and electricity grid management. However, present techniques for capacity prediction rely mainly on the quality of the features extracted from measured signals under strict operating conditions. To improve flexibility and accuracy, this paper introduces a new paradigm based on multi-domain feature time-frequency image (TFI) analysis and a transfer deep learning algorithm, in order to extract diagnostic characteristics of the degradation inside the LIB. The continuous wavelet transform (CWT) is used to transform the one-dimensional (1D) terminal voltage signals of the battery into 2D images (i.e., wavelet energy concentration). The generated TFIs are fed into 2D deep learning algorithms to extract features from the battery voltage images. The extracted features are then used to predict the capacity of the LIB. To validate the proposed technique, experimental data on LIB cells from the datasets published by the NASA Prognostics Center of Excellence (PCoE) were used. The results show that the TFI analysis clearly visualizes the degradation process of the battery, owing to its capability to extract different electrochemical features from the non-stationary and non-linear battery signal in both the time and frequency domains. AlexNet and VGG-16 transfer deep learning neural networks, combined with stochastic gradient descent with momentum (SGDM) and adaptive moment estimation (ADAM) optimization algorithms, are examined to classify the obtained TFIs at different capacity values. The results reveal that the proposed scheme achieves 95.60% prediction accuracy, indicating good potential for the design of improved battery management systems.
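
A sketch of the TFI pipeline under stated assumptions: PyWavelets computes a Morlet scalogram of a synthetic voltage trace, which is resized and fed to a pretrained VGG-16 whose head is swapped for a hypothetical five-class capacity output (the signal, scales, and class count are illustrative):

```python
# CWT scalogram of a synthetic voltage trace, resized and passed through a
# pretrained VGG-16 with its head swapped for an illustrative 5-class output.
import numpy as np
import pywt
import torch
from torchvision.models import vgg16

voltage = np.sin(np.linspace(0, 20, 1024)) + 0.05 * np.random.randn(1024)
coeffs, _ = pywt.cwt(voltage, scales=np.arange(1, 65), wavelet="morl")
scalogram = np.abs(coeffs)                                # 64 x 1024 energy map

img = torch.tensor(scalogram, dtype=torch.float32)[None, None]
img = torch.nn.functional.interpolate(img, size=(224, 224))   # VGG input size
img = img.repeat(1, 3, 1, 1) / img.max()                  # tile to 3 channels

model = vgg16(weights="IMAGENET1K_V1")                    # ImageNet pretraining
model.classifier[6] = torch.nn.Linear(4096, 5)            # 5 capacity classes, say
model.eval()
with torch.no_grad():
    print(model(img).shape)                               # torch.Size([1, 5])
```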
APA, Harvard, Vancouver, ISO, and other styles
47

Nakayama, Hirofumi, Takayuki Shimaoka, Kiyoshi Omine, Maryono, Plubcharoensuk Patsaraporn, and Orawan Siriratpiriya. "Solid Waste Management in Bangkok at 2011 Thailand Floods." Journal of Disaster Research 8, no. 3 (June 1, 2013): 456–64. http://dx.doi.org/10.20965/jdr.2013.p0456.

Full text
Abstract:
A large amount of municipal and industrial flood waste was generated during the 2011 monsoon in Thailand. This paper examines the generation and disposal of flood waste related to the Thailand floods using data obtained through field surveys and interviews with the organizations involved. Several problems with flood waste treatment were found. These included a shortage of waste collection capacity, such as vehicles and boats, under emergency conditions; a lack of appropriately designed temporary waste storage at waste transfer stations; a lack of recycling systems for the wood waste that dominated the flood waste; and the possibility that mixed disposal of municipal and industrial waste introduced contamination. To improve flood waste treatment, proposals were provided for the pre-disaster, disaster, and post-disaster stages.
APA, Harvard, Vancouver, ISO, and other styles
48

Delgado Peris, Antonio, José Flix Molina, José M. Hernández, Antonio Pérez-Calero Yzquierdo, Carlos Pérez Dengra, Elena Planas, Francisco Javier Rodríguez Calonge, and Anna Sikora. "CMS data access and usage studies at PIC Tier-1 and CIEMAT Tier-2." EPJ Web of Conferences 245 (2020): 04028. http://dx.doi.org/10.1051/epjconf/202024504028.

Full text
Abstract:
The current computing models from the LHC experiments indicate that the HL-LHC era (2026+) will require much larger resource increases than technology evolution at a constant budget can provide. Since the worldwide budget for computing is not expected to increase, many research activities have emerged to improve the performance of the LHC processing software applications, as well as to propose more efficient resource deployment scenarios and data management techniques that might reduce this expected increase in resources. The massively increasing amounts of data to be processed lead to enormous challenges for HEP storage systems, networks, and data distribution to end users. These challenges are particularly important in scenarios in which the LHC data would be distributed from a small number of centers holding the experiments' data. Enabling data locality relative to computing tasks via local caches on sites seems a very promising approach to hide transfer latencies while reducing the deployed storage space and the overall number of replicas. However, this depends strongly on the workflow I/O characteristics and the network available across sites. A crucial step is to assess how the experiments are accessing and using the storage services deployed at WLCG sites, in order to evaluate and simulate the benefits of several of the new proposals emerging within WLCG/HSF. This report reviews studies on storage access and usage, and on data access and popularity, for the CMS workflows executed at the Spanish Tier-1 (PIC) and Tier-2 (CIEMAT) sites supporting CMS activities, based on local and experiment monitoring data spanning more than one year. This is relevant for the simulation of data caches for end-user analysis data, as well as for identifying potential areas for storage savings.
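
As a toy illustration of the cache simulations such studies feed into, the following sketch replays a stand-in access trace against an LRU cache to estimate the hit rate a site-local cache might see; the trace and capacity are hypothetical, not CMS monitoring data:

```python
# Replay an access trace against an LRU cache to estimate the hit rate a
# site-local cache would see; the trace below is a stand-in, not CMS data.
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    cache = OrderedDict()
    hits = 0
    for name in trace:
        if name in cache:
            hits += 1
            cache.move_to_end(name)            # refresh recency on a hit
        else:
            cache[name] = None
            if len(cache) > capacity:
                cache.popitem(last=False)      # evict least recently used
    return hits / len(trace)

trace = ["a", "b", "a", "c", "a", "b", "d", "a"]
print(f"hit rate: {lru_hit_rate(trace, capacity=2):.2f}")
```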
APA, Harvard, Vancouver, ISO, and other styles
49

Chinnasamy, Pennan, and Revathi Ganapathy. "Long-term variations in water storage in Peninsular Malaysia." Journal of Hydroinformatics 20, no. 5 (November 7, 2017): 1180–90. http://dx.doi.org/10.2166/hydro.2017.043.

Full text
Abstract:
Information on ongoing climate change impacts on water availability is limited for Asian regions, particularly for Peninsular Malaysia. Annual flash floods are common during peak monsoon seasons, while the dry seasons are hit by droughts, leading to socio-economic stress. This study, for the first time, analyzed the long-term trends (14 years, from 2002 to 2014) in terrestrial water storage and groundwater storage for Peninsular Malaysia, using Gravity Recovery And Climate Experiment (GRACE) data. Results indicate a decline in net terrestrial and groundwater storage over the last decade. Spatially, the northern regions are more affected by droughts, while the southern regions see more flash floods. Groundwater storage trends show strong correlations with the monsoon seasons, indicating that most of the shallow-aquifer groundwater is used. Results also indicate that, with proper planning and management, excess monsoon and flash-flood water can be stored in water storage structures, up to the order of 87 billion liters per year. This can help with dry-season water distribution and water transfer projects. Findings from this study can expand the understanding of ongoing climate change impacts on groundwater storage and terrestrial water storage, and can lead to better management of water resources in Peninsular Malaysia.
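
A small illustrative sketch of the trend analysis, assuming a monthly storage-anomaly series; the synthetic values below only mimic a declining, seasonal signal and are not GRACE data:

```python
# Fit a linear trend to a synthetic monthly storage-anomaly series (14 years);
# the declining, seasonal signal below only mimics the shape of such data.
import numpy as np

months = np.arange(14 * 12)
storage = (-0.02 * months
           + 2.0 * np.sin(2 * np.pi * months / 12)
           + np.random.default_rng(1).normal(0, 0.5, months.size))
slope, intercept = np.polyfit(months, storage, 1)
print(f"trend: {slope * 12:.3f} storage units per year")   # negative => decline
```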
APA, Harvard, Vancouver, ISO, and other styles
50

Nagornov, Stanislav, Maksim Levin, and Ekaterina Levina. "Concept of “smart” oil storage facility for agricultural purposes." BIO Web of Conferences 17 (2020): 00176. http://dx.doi.org/10.1051/bioconf/20201700176.

Full text
Abstract:
The technological parameters and technical level of the equipment at an oil storage facility influence motor fuel quality and losses during reception, storage, and transfer. The use of intelligent systems in the oil storage and handling process enhances quality preservation and reduces the motor fuel losses caused by evaporation, oxidation, and hydration during storage in above-ground horizontal steel tanks. Systems managing "smart" oil storage facilities combine technologies for the online collection, transmission, and storage of information with instant data processing and analysis, and with managerial decision-making techniques. A methodological framework, including algorithms and a program with sensors to monitor the indicators of an automated horizontal oil reservoir, has been developed to control the technological parameters (temperature, pressure, fuel level) of the tanks during storage of light oil products, and to protect fuel against flooding and evaporation. Applying a neural network forecasting technique to fuel losses from evaporation during storage, and processing the resulting data array, made it possible to calculate gasoline losses during storage in horizontal above-ground tanks of up to 100 m3 capacity with 98% accuracy. The neural network enables the development of new fuel storage algorithms and the calculation of the optimal storage amount to minimize losses. The concept and the developed digital intelligent control solutions for oil storage allow oil management data to be combined into a single information space, and the automated oil storage system to be controlled using neural networks, deep learning, and Big Data.
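
A hedged sketch of the forecasting step, assuming a small neural-network regressor from tank conditions to evaporation loss; the feature choice and training data are synthetic placeholders, not the authors' measurements:

```python
# Small neural-network regressor from tank conditions to evaporation loss;
# feature choice and training data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Features: [temperature C, pressure kPa, fill-level fraction]
X = rng.uniform([5, 95, 0.1], [35, 105, 0.95], size=(200, 3))
# Synthetic target: losses grow with temperature and with headspace volume
y = 0.04 * X[:, 0] * (1 - X[:, 2]) + rng.normal(0, 0.01, 200)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
model.fit(X, y)
print("predicted loss:", model.predict([[30.0, 100.0, 0.3]])[0])
```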
APA, Harvard, Vancouver, ISO, and other styles