To see the other types of publications on this topic, follow the link: Replication stre.

Journal articles on the topic 'Replication stre'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Replication stre.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Merritt, Ronald. "Utilizing the Generalized Linear Mixed Model for Specification and Simulation of Transient Vibration Environments." Journal of the IEST 53, no. 2 (2010): 35–49. http://dx.doi.org/10.17764/jiet.53.2.y7291022622225x3.

Abstract:
Transient vibration environments are an important consideration in qualification of aircraft store components — particularly for aircraft with internal storage bays. Generally, these transient vibration environments provide high stimulus input to a store via aerodynamic forces for up to 15 seconds on numerous occasions during training. With the recent introduction of the technique of Time Waveform Replication (TWR) to laboratory testing (MIL-STD-810G Method 525), store components can be readily tested to replications of field-measured transient vibration environments. This paper demonstrates the use of the Generalized Linear Mixed Model (GLMM) on a collection of measured field responses for specification of transient vibration environments. The paper establishes a basis for moving from (1) transient vibration measured field response to (2) transient vibration stochastic specification of the measured field response to (3) laboratory simulation of transient vibration environments.
2

Compain, Fabrice, Lionel Frangeul, Laurence Drieux, et al. "Complete Nucleotide Sequence of Two Multidrug-Resistant IncR Plasmids from Klebsiella pneumoniae." Antimicrobial Agents and Chemotherapy 58, no. 7 (2014): 4207–10. http://dx.doi.org/10.1128/aac.02773-13.

Abstract:
We report here the complete nucleotide sequence of two IncR replicons encoding multidrug resistance determinants, including β-lactam (blaDHA-1, blaSHV-12), aminoglycoside (aphA1, strA, strB), and fluoroquinolone (qnrB4, aac(6′)-Ib-cr) resistance genes. The plasmids have backbones that are similar to each other, including the replication and stability systems, and contain a wide variety of transposable elements carrying known antibiotic resistance genes. This study confirms the increasing clinical importance of IncR replicons as resistance gene carriers.
3

Madi, Mohammed K., Yuhanis Yusof, and Suhaidi Hassan. "Replica Placement Strategy for Data Grid Environment." International Journal of Grid and High Performance Computing 5, no. 1 (2013): 70–81. http://dx.doi.org/10.4018/jghpc.2013010105.

Abstract:
A Data Grid is an infrastructure that manages huge numbers of data files and provides intensive computational resources across geographically distributed collaborations. To increase resource availability and to ease resource sharing in such an environment, replication services are needed. Data replication is one of the methods used to improve the performance of data access in distributed systems by placing multiple copies of data files at distributed sites. Replica placement is the process of identifying where to place copies of replicated data files in a Grid system. Existing work identifies suitable sites based on the number of requests for and the read cost of the required file; such approaches consume large amounts of bandwidth and increase computational time. The authors propose a replica placement strategy (RPS) that finds the best locations to store replicas based on four criteria: 1) read cost, 2) file transfer time, 3) sites' workload, and 4) replication sites. OptorSim is used to evaluate the performance of this replica placement strategy. The simulation results show that RPS requires less execution time and consumes less network usage than the existing approaches of Simple Optimizer and LFU (Least Frequently Used).
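The four placement criteria lend themselves to a simple weighted scoring pass over candidate sites. The sketch below is an illustration only, not the authors' actual formulation: the weights, the site fields, and the `score` function are assumptions, and the criteria are presumed pre-normalized to comparable scales.

```python
# Hypothetical multi-criteria replica placement score in the spirit of RPS.
# Weights and field names are assumptions; lower scores indicate better sites.

def score(site, weights=(0.3, 0.3, 0.2, 0.2)):
    """Combine read cost, transfer time, workload, and nearby-replica count."""
    w_read, w_xfer, w_load, w_repl = weights
    return (w_read * site["read_cost"]
            + w_xfer * site["transfer_time"]
            + w_load * site["workload"]
            + w_repl * site["nearby_replicas"])

def best_sites(candidates, k=2):
    """Return the k lowest-scoring candidate sites for new replicas."""
    return sorted(candidates, key=score)[:k]

candidates = [
    {"name": "siteA", "read_cost": 0.4, "transfer_time": 0.2,
     "workload": 0.7, "nearby_replicas": 1},
    {"name": "siteB", "read_cost": 0.3, "transfer_time": 0.5,
     "workload": 0.2, "nearby_replicas": 0},
]
print([s["name"] for s in best_sites(candidates, k=1)])  # ['siteB']
```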
4

Fang, Kuo-Chi, Husnu S. Narman, Ibrahim Hussein Mwinyi, and Wook-Sung Yoo. "PPHA-Popularity Prediction Based High Data Availability for Multimedia Data Center." International Journal of Interdisciplinary Telecommunications and Networking 11, no. 1 (2019): 17–29. http://dx.doi.org/10.4018/ijitn.2019010102.

Abstract:
Due to the growth of internet-connected devices and extensive data analysis applications in recent years, cloud computing systems are heavily utilized, and the demand for data center management has increased. Data center management has several crucial requirements, such as increasing data availability, enhancing durability, and decreasing latency. Previous works mostly use replication to meet these needs according to consistency requirements; however, most consider full-data, popular-data, or geo-distance-based replication while weighing storage and replication cost, and previous popularity-based techniques rely only on historical and current data access frequencies. In this article, the authors approach the problem from a different angle, developing replication techniques for a multimedia data center management system that can dynamically adapt the servers of a data center by predicting popularity at each data access location. They first label data objects from one to ten to track access frequencies, then use the access frequencies from each location to predict future access frequencies, determine replication levels and locations, and store the related data objects on nearby storage servers. To show the efficiency of the proposed methods, the authors conduct an extensive simulation using real data. The results show that the proposed method has an advantage over previous works and increases data availability by up to 50%. The proposed method and related analysis can assist multimedia service providers in enhancing their service quality.
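To make the prediction step concrete, here is a minimal sketch of per-location popularity forecasting with exponential smoothing. The smoothing factor, threshold, and names are assumptions; the paper's actual predictor and its one-to-ten labeling scheme are not reproduced here.

```python
# Hedged sketch: forecast next-period access frequency per (object, location)
# with exponential smoothing, then replicate predicted-hot objects near the
# locations driving the demand.

from collections import defaultdict

ALPHA = 0.5          # smoothing factor (assumption)
HOT_THRESHOLD = 10.0  # predicted accesses/period that trigger a replica (assumption)

forecast = defaultdict(float)  # (obj, location) -> predicted accesses

def observe(obj, location, accesses):
    """Fold the latest period's access count into the running forecast."""
    key = (obj, location)
    forecast[key] = ALPHA * accesses + (1 - ALPHA) * forecast[key]

def placement_plan():
    """Replicate an object at every location where it is predicted hot."""
    return [(obj, loc) for (obj, loc), f in forecast.items() if f >= HOT_THRESHOLD]

observe("video42", "us-east", 30)
observe("video42", "eu-west", 5)
print(placement_plan())  # [('video42', 'us-east')]
```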
5

de Groot, Jasper H. B., Charly Walther, and Rob W. Holland. "A Fresh Look on Old Clothes: Laundry Smell Boosts Second-Hand Store Sales." Brain Sciences 12, no. 11 (2022): 1526. http://dx.doi.org/10.3390/brainsci12111526.

Abstract:
The clothing industry is one of the biggest polluters impacting the environment. Set in a sustainable context, this study addresses whether certain ambient odors can influence the purchase of second-hand clothing. The study fulfilled three aims, increasing methodological, statistical, and theoretical rigor. First, it replicated the finding that fresh laundry odor can boost purchasing behavior in a second-hand store, this time in a larger sample, using a fully counterbalanced design, in a pre-registered study. Second, it assessed the effectiveness of another cleanliness-priming control condition (citrus odor) unrelated to the products at hand, to test hypotheses from a hedonic vs. utilitarian model. Third, it combined questionnaire data tapping into psychological processes with registered sales. The results (316 questionnaires, 6,781 registered transactions) showed that fresh laundry odor significantly increased the amount of money spent by customers compared to the no-smell condition (replication) and compared to citrus odor (extension). Arguably, fresh laundry odor boosts the utilitarian value of the product at (second) hand by making it smell like non-used clothing, ultimately causing customers to purchase far greater amounts in this sustainable setting.
6

Puranik, Sunil, Mahesh Barve, Swapnil Rodi, and Rajendra Patrikar. "FPGA-Based High-Throughput Key-Value Store Using Hashing and B-Tree for Securities Trading System." Electronics 12, no. 1 (2022): 183. http://dx.doi.org/10.3390/electronics12010183.

Abstract:
Field-Programmable Gate Array (FPGA) technology is extensively used in finance. This paper describes a high-throughput key-value store (KVS) for securities trading system applications using an FPGA. The design uses a combination of hashing and B-tree techniques and supports the large number of keys (40 million) required by the trading system. We use a novel technique of buckets of different capacities to reduce the amount of Block RAM (BRAM) and perform high-speed lookups. The design uses high-bandwidth memory (HBM), an on-chip memory available in Virtex UltraScale+ FPGAs, to support the large number of keys. Another feature of this design is the replication of the database and lookup logic to increase overall throughput. By implementing multiple lookup engines in parallel and replicating the database, we achieve the high throughput (up to 6.32 million search operations/second) specified by our client, a major stock exchange. The design has been implemented with a combination of Verilog and high-level synthesis (HLS) flows to reduce implementation time.
7

Rambabu, D. "Enhancing replica management in a cloud environment using data mining based dynamic replication algorithm." i-manager’s Journal on Cloud Computing 9, no. 1 (2022): 17. http://dx.doi.org/10.26634/jcc.9.1.18566.

Abstract:
Cloud computing has recently gained much popularity and attention from the research community. One of the many on-demand services that large-scale applications provide to cloud customers is storage; as more data is generated, the need for storage grows accordingly. Although users can rely on the cloud to store data and provide the type of storage they desire, storing and retrieving data still takes a significant amount of time due to the sheer accumulation of data. Because data availability, response time, and reliability must be improved and migration costs reduced, the current storage engine needs to be replicated across multiple sites. When copies are properly distributed, data replication speeds up execution. The biggest challenges in data replication are choosing which data to replicate, where to place it, how to manage replication, and how many replicas are needed. Various studies have therefore examined data mining-based data replication systems to evaluate replication issues and manage cloud storage; in most cases, such systems combine data mining with a replication algorithm and a data grid policy. In addition, this paper addresses replica management issues and proposes affordable data replication in the cloud that satisfies all Quality of Service (QoS) requirements.
8

Alshammari, Mohammad M., Ali A. Alwan, Azlin Nordin, and Abedallah Zaid Abualkishik. "Data Backup and Recovery With a Minimum Replica Plan in a Multi-Cloud Environment." International Journal of Grid and High Performance Computing 12, no. 2 (2020): 102–20. http://dx.doi.org/10.4018/ijghpc.2020040106.

Abstract:
Cloud computing has become a desirable choice for storing and sharing large amounts of data among several users. The two main concerns with cloud storage are data recovery and the cost of storage. This article discusses data recovery in case of a disaster in a multi-cloud environment. The research proposes a preventive approach to data backup and recovery that aims to minimize the number of replicas while ensuring high data reliability during disasters. The approach, named Preventive Disaster Recovery Plan with Minimum Replica (PDRPMR), reduces the number of replicas in the cloud without compromising data reliability: it preventively checks replica availability and monitors for denial-of-service attacks. Several experiments were conducted to evaluate the effectiveness of PDRPMR, and the results demonstrated that it uses only one-third to two-thirds of the storage space of typical 3-replica replication strategies.
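A minimal sketch of the "minimum replica" idea: keep a replica floor below the usual three copies, but audit liveness preventively and restore the floor when a cloud fails. The names, the floor of two, and the liveness check are assumptions for illustration, not the PDRPMR protocol itself.

```python
# Hypothetical preventive audit: drop dead replicas, then re-replicate onto
# healthy, unused clouds until the minimum-replica floor is restored.

MIN_REPLICAS = 2  # floor below the typical 3-replica policy (assumption)

def audit(block_id, replicas, is_alive, clouds):
    live = [r for r in replicas if is_alive(r)]
    spares = (c for c in clouds if c not in live and is_alive(c))
    while len(live) < MIN_REPLICAS:
        target = next(spares)   # raises StopIteration if no healthy cloud is left
        live.append(target)     # stands in for an actual copy operation
        print(f"re-replicating {block_id} -> {target}")
    return live

clouds = ["cloudA", "cloudB", "cloudC"]
alive = lambda c: c != "cloudB"  # simulate cloudB going down
print(audit("blk-7", ["cloudA", "cloudB"], alive, clouds))
# re-replicating blk-7 -> cloudC
# ['cloudA', 'cloudC']
```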
9

Kim, Gyuyeong, and Wonjun Lee. "In-network leaderless replication for distributed data stores." Proceedings of the VLDB Endowment 15, no. 7 (2022): 1337–49. http://dx.doi.org/10.14778/3523210.3523213.

Abstract:
Leaderless replication allows any replica to handle any type of request to achieve read scalability and high availability for distributed data stores. However, this entails burdensome coordination overhead in the replication protocol, degrading write throughput. In addition, the data store still requires coordination for membership changes, making it hard to resolve server failures quickly. To this end, we present NetLR, a replicated data store architecture that supports high performance, fault tolerance, and linearizability simultaneously. The key idea of NetLR is to move the entire replication function into the network by leveraging the switch as an on-path in-network replication orchestrator. Specifically, NetLR performs consistency-aware read scheduling, high-performance write coordination, and active fault adaptation in the network switch. Our in-network replication eliminates inter-replica coordination for writes and membership changes, providing high write performance and fast failure handling. NetLR can be implemented using programmable switches at line rate with only 5.68% additional memory usage. We implement a prototype of NetLR on an Intel Tofino switch and conduct extensive testbed experiments. Our evaluation results show that NetLR is the only solution that achieves both high throughput and low latency while remaining robust to server failures.
10

Heryanto, Ahmad, and Albert Albert. "Implementasi Sistem Database Terdistribusi Dengan Metode Multi-Master Database Replication." JURNAL MEDIA INFORMATIKA BUDIDARMA 3, no. 1 (2019): 30. http://dx.doi.org/10.30865/mib.v3i1.1098.

Abstract:
Databases are a core need of every computer application for storing, processing, and modifying data. One important problem faced by databases is the availability of adequate information technology infrastructure for managing and securing the data they contain. Data stored in a database must be protected against threats and disturbances, which can result from a variety of causes, such as maintenance, data damage, and natural disasters. To guard against data loss and damage, the database system should be replicated. The replication mechanism used by the researchers is multi-master replication; this technique is able to form a database cluster with a replication time of less than 0.2 seconds.
11

Rousseau, D. "Pre-purchase information search and consumer satisfaction: Replication and extension." South African Journal of Business Management 17, no. 4 (1986): 220–24. http://dx.doi.org/10.4102/sajbm.v17i4.1061.

Abstract:
In this paper the author examines consumer satisfaction with major household appliances and its determining factors. Hypotheses relating pre-purchase information search to product satisfaction, and previous satisfactory store experiences to subsequent repurchase behaviour, are proposed and empirically tested using data from 55 consumers who patronized a large Eastern Cape hypermarket. Results imply that product satisfaction is more closely related to marketplace variables than to actual search behaviour. Repeat shopping intentions are associated with previous shopping experiences at the particular store, which also contribute to product satisfaction. Marketing implications and future research directions are briefly discussed.
12

Lin, Zhibin, and Dag Bennett. "Examining retail customer experience and the moderation effect of loyalty programmes." International Journal of Retail & Distribution Management 42, no. 10 (2014): 929–47. http://dx.doi.org/10.1108/ijrdm-11-2013-0208.

Abstract:
Purpose – The purpose of this paper is to examine the construct of retail customer experience (CE) and its links to satisfaction and loyalty; and to test whether loyalty programmes perform a moderating effect on those links. Design/methodology/approach – A variety of retail attributes are integrated to develop a holistic CE construct using formative measures, with four in-built, differentiated replication studies conducted in the supermarket and department store sectors in China. Findings – The empirical results confirm the model of CE’s impact on customer satisfaction and loyalty; but reveal that loyalty programmes perform an insignificant moderating role in enhancing the linkages in the model. Research limitations/implications – Further studies may examine whether our findings hold true for each individual loyalty programme. The paper calls for more studies based on multiple, in-built, differentiated replication studies and measures to encourage publication of negative empirical results so as to ensure empirical generalization and self-correction in the literature. Practical implications – Retail managers should focus attention on the design and delivery of great CE, without placing great reliance on loyalty programmes. Both cognitive and emotional attributes of retailing services should be considered for managing a holistic CE. Originality/value – The paper examines a model of CE with loyalty programme as a possible moderator; it uses formative measures of CE, multiple in-built replications and reports negative empirical results, which are critical to the development of scientific progress in retail management research.
13

Rizwan Ali, Muhammad, Farooq Ahmad, Muhammad Hasanain Chaudary, et al. "Petri Net based modeling and analysis for improved resource utilization in cloud computing." PeerJ Computer Science 7 (February 8, 2021): e351. http://dx.doi.org/10.7717/peerj-cs.351.

Abstract:
The cloud is a shared pool of systems that provides multiple resources through the Internet; users can access substantial computing power from their own computers. However, with the strong rate of migration of applications to the cloud, more disks and servers are required to store the resulting huge volumes of data. Most cloud storage service providers replicate full copies of data over multiple data centers to ensure availability, but replication is both costly and wasteful of energy resources. Erasure codes reduce the storage cost by splitting data into n chunks and storing these chunks, with parity, across n + k different data centers to tolerate k failures; however, they require extra computation to regenerate a data object. Cache-A Replica On Modification (CAROM) is a hybrid file system that combines the benefits of replication and erasure codes to reduce access latency and bandwidth consumption. However, no formal analysis of CAROM that validates its performance is available in the literature. To address this issue, this research first presents a colored Petri net based formal model of CAROM, then presents a formal analysis and simulation to validate the performance of the proposed system. This paper contributes to the utilization of resources in clouds by presenting a comprehensive formal analysis of CAROM.
14

Hume, Louise, Christopher A. Dodd, and Nigel P. Grigg. "In-Store Selection of Wine—No Evidence for the Mediation of Music?" Perceptual and Motor Skills 96, no. 3_suppl (2003): 1252–54. http://dx.doi.org/10.2466/pms.2003.96.3c.1252.

Abstract:
435 visitors to a wine store were observed as part of a field-study replication wherein product purchase was measured under various music conditions. Contrary to previous findings from a much smaller study, the use of stereotypically representative music to mediate associated product choices showed no significant effects.
15

Prabhakar, Sunil, and Rahul Chari. "Minimizing Latency and Jitter for Large-Scale Multimedia Repositories through Prefix Caching." International Journal of Image and Graphics 03, no. 01 (2003): 95–117. http://dx.doi.org/10.1142/s0219467803000932.

Abstract:
Multimedia data poses challenges for efficient storage and retrieval due to its large size and playback timing requirements. For applications that store very large volumes of multimedia data, hierarchical storage offers a scalable and economical alternative to store data on magnetic disks. In a hierarchical storage architecture data is stored on a tape or optical disk based tertiary storage layer with the secondary storage disks serving as a cache or buffer. Due to the need for swapping media on drives, retrieving multimedia data from tertiary storage can potentially result in large delays before playback (startup latency) begins as well as during playback (jitter). In this paper we address the important problem of reducing startup latency and jitter for very large multimedia repositories. We propose that secondary storage should not be used as a cache in the traditional manner — instead, most of the secondary storage should be used to permanently store partial objects. Furthermore, replication is employed at the tertiary storage level to avoid expensive media switching. In particular, we show that by saving the initial segments of documents permanently on secondary storage, and replicating them on tertiary storage, startup latency can be significantly reduced. Since we are effectively reducing the amount of secondary storage available for buffering the data from tertiary storage, an increase in jitter may be expected. However, our results show that the technique also reduces jitter, in contrast to the expected behavior. Our technique exploits the pattern of data access. Advance knowledge of the access pattern is helpful, but not essential. Lack of this information or changes in access patterns are handled through adaptive techniques. Our study addresses both single- and multiple-user scenarios. Our results show that startup latency can be reduced by as much as 75% and jitter practically eliminated through the use of these techniques.
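The core idea, pinning an initial segment of every document on secondary storage so playback starts before tertiary media is even mounted, can be sketched in a few lines. The segment size, names, and fake media below are assumptions for illustration, not the paper's system.

```python
# Sketch of prefix caching: serve the pinned prefix immediately, then stream
# the remainder from tertiary storage, masking the media-switch delay.

PREFIX_CHUNKS = 10  # chunks of each document pinned on disk (assumption)

disk_prefix = {}  # doc_id -> first PREFIX_CHUNKS of the media

def ingest(doc_id, chunks):
    """Permanently store the initial segment on secondary storage."""
    disk_prefix[doc_id] = chunks[:PREFIX_CHUNKS]

def play(doc_id, fetch_from_tertiary):
    yield from disk_prefix[doc_id]                          # near-zero startup latency
    yield from fetch_from_tertiary(doc_id, PREFIX_CHUNKS)   # fetched while prefix plays

ingest("movie1", list(range(100)))  # fake media: 100 chunks
tertiary = lambda doc_id, skip: iter(range(skip, 100))
print(sum(1 for _ in play("movie1", tertiary)))  # 100 chunks delivered in order
```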
16

Chen, Haibo, Heng Zhang, Mingkai Dong, et al. "Efficient and Available In-Memory KV-Store with Hybrid Erasure Coding and Replication." ACM Transactions on Storage 13, no. 3 (2017): 1–30. http://dx.doi.org/10.1145/3129900.

17

Bouabache, Fatiha, Thomas Herault, Gilles Fedak, and Franck Cappello. "Hierarchical Replication Techniques to Ensure Checkpoint Storage Reliability in Grid Environment." Journal of Interconnection Networks 10, no. 04 (2009): 345–64. http://dx.doi.org/10.1142/s0219265909002613.

Abstract:
An efficient and reliable fault tolerance protocol plays a key role in High Performance Computing. Rollback recovery is the most common fault tolerance technique used in High Performance Computing, especially in MPI applications. This technique relies on the reliability of the checkpoint storage. Most rollback recovery protocols assume that the checkpoint server machines are reliable. However, in a grid environment any unit can fail at any moment, including the components used to connect different administrative domains. Such failures can lead to the loss of a whole set of machines, including the more reliable machines used to store the checkpoints in an administrative domain. Thus it is not safe to rely on the high mean time between failures of specific machines to store checkpoint images. This paper introduces a new coordinated checkpoint protocol that tolerates checkpoint server failures and cluster failures and ensures checkpoint storage reliability in a grid environment. To provide this reliability, the protocol relies on a replication process. We propose new hierarchical replication strategies that exploit the locality of checkpoint images to minimize inter-cluster communication, and we evaluate the effectiveness of our two hierarchical replication strategies through simulations against several criteria such as topology and scalability.
18

Sahaya Stalin, Jose G., and Christopher C. Seldev. "Minimize the Replication for Secure Cloud Data Storage Systems Using Error Correction Codes." Applied Mechanics and Materials 626 (August 2014): 26–31. http://dx.doi.org/10.4028/www.scientific.net/amm.626.26.

Abstract:
Cloud data centers should be flexible and keep data available indefinitely. Replication is used to achieve high availability and durability in cloud data centers, so that data can be recovered from cloud databases after any failure. The drawback of replication is that each replica is the same size as the original data object, whereas error-correction schemes can reduce the storage consumed across distributed cloud storage systems. The scope of this paper is to store data efficiently in cloud data centers, unlike previous schemes that used erasure codes such as Reed-Solomon codes solely to store data in data centers. This paper proposes to encrypt the message using DES and to encode it using a Reed-Solomon code before storing it. Storing time is convincingly good for the Reed-Solomon code when compared with the Tornado code.
19

Lee, Sekwon, Soujanya Ponnapalli, Sharad Singhal, Marcos K. Aguilera, Kimberly Keeton, and Vijay Chidambaram. "DINOMO." Proceedings of the VLDB Endowment 15, no. 13 (2022): 4023–37. http://dx.doi.org/10.14778/3565838.3565854.

Abstract:
We present Dinomo, a novel key-value store for disaggregated persistent memory (DPM). Dinomo is the first key-value store for DPM that simultaneously achieves high common-case performance, scalability, and lightweight online reconfiguration. We observe that previously proposed key-value stores for DPM had architectural limitations that prevent them from achieving all three goals simultaneously. Dinomo uses a novel combination of techniques such as ownership partitioning, disaggregated adaptive caching, selective replication, and lock-free and log-free indexing to achieve these goals. Compared to a state-of-the-art DPM key-value store, Dinomo achieves at least 3.8X better throughput at scale on various workloads and higher scalability, while providing fast reconfiguration.
20

Zare, Hamidreza, Viveck Ramesh Cadambe, Bhuvan Urgaonkar, et al. "LEGOStore." Proceedings of the VLDB Endowment 15, no. 10 (2022): 2201–15. http://dx.doi.org/10.14778/3547305.3547323.

Abstract:
We design and implement LEGOStore, an erasure coding (EC) based linearizable data store over geo-distributed public cloud data centers (DCs). For such a data store, the confluence of the following factors opens up opportunities for EC to be latency-competitive with replication: (a) the necessity of communicating with remote DCs to tolerate entire DC failures and implement linearizability; and (b) the emergence of DCs near most large population centers. LEGOStore employs an optimization framework that, for a given object, carefully chooses among replication and EC, as well as among various DC placements to minimize overall costs. To handle workload dynamism, LEGOStore employs a novel agile reconfiguration protocol. Our evaluation using a LEGOStore prototype spanning 9 Google Cloud Platform DCs demonstrates the efficacy of our ideas. We observe cost savings ranging from moderate (5-20%) to significant (60%) over baselines representing the state of the art while meeting tail latency SLOs. Our reconfiguration protocol is able to transition key placements in 3 to 4 inter-DC RTTs (< 1s in our experiments), allowing for agile adaptation to dynamic conditions.
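A back-of-the-envelope storage comparison shows why a per-object choice between replication and erasure coding is worth optimizing. This sketch prices storage only, with made-up parameters; LEGOStore's actual framework also models network costs, latency SLOs, and DC placement.

```python
# Storage cost of replication vs. (k, m) erasure coding for one object.
# Parameters are illustrative assumptions, not LEGOStore's configuration.

def replication_bytes(obj_size, n_replicas=3):
    return obj_size * n_replicas

def erasure_bytes(obj_size, k=4, m=2):
    # k data chunks plus m parity chunks, each obj_size / k bytes
    return obj_size * (k + m) / k

size = 1_000_000
print(replication_bytes(size))  # 3000000
print(erasure_bytes(size))      # 1500000.0 -- EC halves storage here, but each
                                # read/write must contact several remote DCs
```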
21

Gretarsdottir, Solveig, Jeffrey Gulcher, Gudmar Thorleifsson, Augustine Kong, and Kari Stefansson. "Comment on the Phosphodiesterase 4D Replication Study by Bevan et al." Stroke 36, no. 9 (2005): 1824. http://dx.doi.org/10.1161/01.str.0000176497.94458.27.

22

Li, Zijian, and Chuqiao Xiao. "ER-Store: A Hybrid Storage Mechanism with Erasure Coding and Replication in Distributed Database Systems." Scientific Programming 2021 (September 10, 2021): 1–13. http://dx.doi.org/10.1155/2021/9910942.

Abstract:
In distributed database systems, efficiency and availability become critical considerations as cluster scales grow. Within a cluster, a common approach to high availability is replication, but it is inefficient due to its low storage utilization. Erasure coding can provide data reliability while ensuring high storage utilization; however, because of the large number of encoding and decoding operations required of the CPU, it is not suitable for frequently updated data. To optimize the storage efficiency of data in a distributed system without affecting availability, this paper proposes a data temperature recognition algorithm that divides data tablets into three types, cold, warm, and hot, according to access frequency. Combining three-replica and erasure coding technology, ER-Store is proposed as a hybrid storage mechanism for the different data types. The authors also use the read-write separation architecture of the distributed database system to design the data temperature conversion cycle, which reduces the computational overhead caused by frequent updates under erasure coding. The design was implemented on the CBase database system, which is based on a read-write separation architecture, and the experimental results show that it saves 14.6%–18.3% of storage space while meeting the system's access performance requirements.
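The temperature-based split can be sketched as a pair of small functions: classify a tablet by access frequency, then map the temperature to a storage scheme. The thresholds and the decision to keep warm data replicated are assumptions, not ER-Store's exact rules.

```python
# Hypothetical hot/warm/cold classification driving a hybrid storage choice:
# replication for update-heavy data, erasure coding for cold data.

HOT_MIN, COLD_MAX = 100, 10  # accesses per cycle (assumed thresholds)

def temperature(accesses_per_cycle):
    if accesses_per_cycle >= HOT_MIN:
        return "hot"
    if accesses_per_cycle <= COLD_MAX:
        return "cold"
    return "warm"

def storage_scheme(accesses_per_cycle):
    # Hot and warm tablets stay on three replicas to keep updates cheap;
    # cold tablets move to erasure coding to reclaim storage.
    return {"hot": "3-replica", "warm": "3-replica",
            "cold": "erasure-coded"}[temperature(accesses_per_cycle)]

for freq in (500, 50, 3):
    print(freq, storage_scheme(freq))
```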
23

Casciano, Jessica C., Nicholas J. Duchemin, R. Jason Lamontagne, Laura F. Steel, and Michael J. Bouchard. "Hepatitis B virus modulates store-operated calcium entry to enhance viral replication in primary hepatocytes." PLOS ONE 12, no. 2 (2017): e0168328. http://dx.doi.org/10.1371/journal.pone.0168328.

24

Miloudi, Imad Eddine, Belabbas Yagoubi, Fatima Zohra Bellounar, and Taieb Chachou. "Adaptive replication strategy based on popular content in cloud computing." Multiagent and Grid Systems 17, no. 3 (2021): 273–95. http://dx.doi.org/10.3233/mgs-210354.

Abstract:
The cloud is an infrastructure that provides decentralized on-demand services, allowing consumers to pay only for the services they use. The consumer is the central entity in the cloud: violating the SLA contract between consumer and provider has real consequences, because the service provider must pay penalties. Data replication is emerging as an ideal solution to the new challenges of the cloud. This paper proposes a new replication strategy based on the popularity of data. The strategy adaptively selects the files to be replicated to improve the overall availability of data in the system, minimize query response time, and achieve the required quality of service. In addition, it dynamically determines the number of replicas to add and the best locations to store them. Experimental results show the effectiveness of the proposed strategy.
25

Hasan, Siham, Meisam Sharifi Sani, Saeid Iranmanesh, Ali H. Al-Bayatti, Sarmadullah Khan, and Raad Raad. "Enhanced Message Replication Technique for DTN Routing Protocols." Sensors 23, no. 2 (2023): 922. http://dx.doi.org/10.3390/s23020922.

Abstract:
Delay-tolerant networks (DTNs) are networks where there is no immediate connection between the source and the destination. Instead, nodes in these networks use a store–carry–forward method to route traffic. However, approaches that rely on flooding the network with unlimited copies of messages may not be effective if network resources are limited. On the other hand, quota-based approaches are more resource-efficient but can have low delivery rates and high delivery delays. This paper introduces the Enhanced Message Replication Technique (EMRT), which dynamically adjusts the number of message replicas based on a node's ability to quickly disseminate the message. This decision is based on factors such as current connections, encounter history, buffer size history, time-to-live values, and energy. The EMRT is applied to three different quota-based protocols: Spray and Wait, Encounter-Based Routing (EBR), and the Destination-Based Routing Protocol (DBRP). The simulation results show that applying the EMRT to these protocols improves the delivery ratio, overhead ratio, and average latency. For example, when combined with Spray and Wait, EBR, and DBRP, delivery probability is improved by 13%, 8%, and 10%, respectively, while average latency is reduced by 51%, 14%, and 13%, respectively.
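As an illustration of quota adjustment, the sketch below scales a base message quota by a node's dissemination score. The score blend and all weights are assumptions; the paper combines current connections, encounter history, buffer size history, TTL values, and energy in its own way.

```python
# Hypothetical EMRT-style quota adjustment: well-connected, well-resourced
# nodes receive more message copies to spray.

def dissemination_score(node):
    """Blend normalized indicators in [0, 1]; the weights are assumptions."""
    return (0.3 * node["connections"]
            + 0.25 * node["encounter_rate"]
            + 0.2 * node["free_buffer"]
            + 0.15 * node["ttl_remaining"]
            + 0.1 * node["energy"])

def replica_quota(node, base_quota=8):
    """Scale the base quota by the node's ability to disseminate quickly."""
    return max(1, round(base_quota * dissemination_score(node)))

hub = {"connections": 0.9, "encounter_rate": 0.8, "free_buffer": 0.7,
       "ttl_remaining": 0.6, "energy": 0.9}
print(replica_quota(hub))  # 6 of the 8 base copies for this strong relay
```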
26

Abkowitz, Janis L., Sandra N. Catlin, Rosemary Gale, Lambert Busque, and Peter Guttorp. "Estimating the Replication Rate of Hematopoietic Stem Cells (HSC) In Vivo." Blood 114, no. 22 (2009): 2522. http://dx.doi.org/10.1182/blood.v114.22.2522.2522.

Abstract:
The replication kinetics of HSC in vivo are difficult to assess because HSC are infrequent, reside in marrow niches, and are regulated by extrinsic as well as intrinsic signals. Determining the replication (self-renewal) rate of human HSC in vivo is especially difficult because limiting-dilution competitive transplantation studies and studies with cell division-sensitive markers are not feasible. Therefore, we developed three surrogate methods by extending observations in other species. These approaches use different data and different assumptions, yet yield overlapping estimates, and together suggest that human HSC replicate on average once per 40 weeks. This is much slower than the replication rates of HSC in mouse (∼ once per 2.5 wks), cat (∼ once per 8.3–10 wks), and nonhuman primate (∼ once per 25 wks) (reviewed in [1]). Specifically, we analyzed the drift in the X-chromosome phenotype of blood cells from 1219 females (ages 18–100, mean 56±22; Montreal cohort), assuming that HSC expressing X-alleles from one parent might divide slightly faster than HSC expressing X-alleles from the other parent, and that over time this subtle growth advantage would lead to phenotypic skewing of HSC and progeny cells. We calculated that the drift that occurs with aging in the X-chromosome phenotype of granulocytes in female Safari cats (F1 offspring of Geoffroyi (G) × domestic (d) cats) can be explained by a 5% difference in the replication rates of HSC expressing G vs. d X-chromosomes. Knowing that G and d cats evolved independently for > 9 million yrs, we simulated human hematopoiesis using a Markovian description of HSC differentiation and the single constraint that the differences in the replication rates of HSC expressing paternal vs. maternal-derived X-chromosomes in individual human females (150,000 yrs of genetic distance) would be less than 5%. For each replication rate (λ), we generated 100 sets of 1219 virtual females (the replication rates of HSC expressing maternal and paternal X-chromosomes in each individual were randomly drawn from a distribution with median λ and variance as observed in Safari cats) and determined whether the pattern of X-chromosome phenotype in their blood cells with aging resembled the Montreal data. If yes, λ was included as a possible human HSC replication rate; if no, it was excluded. Using this approach, we defined plausible values for λ, then confirmed the results by a comparable analysis of a second dataset (London cohort; 117 females, ages 18–96, mean 66±24). Remarkably, this value (1 per 40 wks; range 1 per 25–50 wks) is also consistent with estimates derived by two independent methods: analysis of granulocyte telomere length with aging [2] and application of the concept that the total number of HSC divisions during a mammal's lifetime is evolutionarily conserved (data not shown). We next demonstrated that the estimate was reasonable by simulating human marrow transplantation. When 100 HSC are transplanted (corresponding to 1.9 × 10^8 marrow cells (MC)/kg for a 70 kg recipient), all virtual recipients engraft, consistent with the clinical recommendation that > 2–3 × 10^8 MC/kg be transplanted. Also, when 20 HSC are transplanted (3.9 × 10^7 MC/kg), graft failure is common (occurring in 50% of simulations). Of interest, these virtual recipients run out of short-term repopulating cells (STRC) within 30 wks, though HSC persist. There are several clinically relevant corollaries to this latter observation. First, children (small size, fewer mature blood cells) tolerate transplantation of fewer HSC better than adults, because few STRC can produce safe numbers of granulocytes, red cells, and platelets. Second, supplementing transplantation with infusions of multipotent progenitor cells, such as those present in cytokine- or Notch ligand Delta1-expanded cord blood, should be an efficacious method to ensure adequate hematopoiesis when the numbers of HSC are low and STRC are especially low. The data also raise the possibility that human marrow failure syndromes such as aplastic anemia and myelodysplasia could result from defective HSC commitment or defective expansion of differentiating STRC clones, and not only from the destruction or depletion of HSC. These events may be difficult to study in mice, where simulations predict that graft failure occurs from HSC depletion (data not shown), justifying the need for larger-animal (cat, dog, primate) or human investigation. [1] Blood 110:1806, 2007. [2] Exp Hematol 32:1014, 2004. Disclosures: No relevant conflicts of interest to declare.
27

Thalij, Saadi Hamad, and Veli Hakkoymaz. "Multiobjective Glowworm Swarm Optimization-Based Dynamic Replication Algorithm for Real-Time Distributed Databases." Scientific Programming 2018 (December 4, 2018): 1–16. http://dx.doi.org/10.1155/2018/2724692.

Abstract:
Distributed systems offer resources that can be accessed geographically to serve large-scale data requests from different users. In many cases, replicating vital data files and storing the replicas in multiple locations accessible to requesting clients is essential for improving data availability, reliability, and security and for reducing execution time. It is important that real-time distributed databases maintain consistency constraints and also guarantee the time constraints required by client requests. However, as the size of a distributed system increases, user access time also tends to increase, which in turn makes replica placement more important. Thus, the primary issues that emerge are deciding on an optimal number of replicas and identifying the best locations to store the replicated data. These open challenges are considered in this study, which develops a dynamic data replication algorithm for real-time distributed databases using a multiobjective glowworm swarm optimization (MGSO) strategy. The proposed algorithm adapts to the random patterns of read-write requests and employs a dynamic window mechanism for replication. It also models the replica number and placement problem as a multiobjective optimization problem and utilizes MGSO to solve it. Cost models are presented to ensure time-constraint satisfaction in servicing user requests. The performance of the MGSO dynamic data replication algorithm has been studied using competitive analysis, and the results show the efficiency of the proposed algorithm for distributed databases.
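For background, glowworm swarm optimization (as introduced by Krishnanand and Ghose) has each glowworm i carry a luciferin level tied to the objective value at its position and take a fixed step toward a probabilistically chosen brighter neighbor j. The standard single-objective updates are sketched below for reference; the multiobjective variant used in the paper adapts the objective J to multiple objectives.

```latex
% Standard GSO updates (single-objective form), stated for reference only.
% rho: luciferin decay, gamma: enhancement constant, s: step size.
\ell_i(t+1) = (1-\rho)\,\ell_i(t) + \gamma\,J\bigl(x_i(t+1)\bigr)
\qquad
x_i(t+1) = x_i(t) + s\,\frac{x_j(t)-x_i(t)}{\lVert x_j(t)-x_i(t)\rVert}
```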
28

Mao, Hong Xia. "Research on Security and Reliability Technologies in Cloud Storage." Applied Mechanics and Materials 496-500 (January 2014): 1905–8. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.1905.

Abstract:
Cloud storage has many advantages and has become an ideal way to store data, so the technical questions of its security and reliability must be researched. Access control and encryption are used to secure data storage. Redundancy technologies that can improve data reliability in cloud storage include replication and erasure codes. In the specific experimental environment studied, erasure codes have a clear advantage in fault tolerance and storage capacity.
29

Maya Citra. "Influence of Product Stock, Location and Atmosphere Shop Against the Purchase Decision at Indomaret SM Raja Street Deblod Sundoro High Cliff City." International Journal of Community Service (IJCS) 2, no. 2 (2021): 22–36. http://dx.doi.org/10.55299/ijcs.v2i2.218.

Abstract:
This study aims to determine the effect of product stock, location, and store atmosphere on purchasing decisions at Indomaret SM Raja Jl. Deblod Sundoro, Tebing Tinggi City, examining both direct and indirect effects. The research is a replication and development of similar previous studies but with different objects, variables, and periods. The study used a sample of 96 respondents, drawn using the Cochran formula technique. The analytical tool used is multiple linear regression in the SPSS 25 program. The results show a significant effect of Product Stock (X1) on Purchase Decisions (Y), no significant effect of Location (X2) on Purchase Decisions (Y), and a significant effect of Store Atmosphere (X3) on Purchase Decisions (Y).
30

Zmaranda, Doina R., Cristian I. Moisi, Cornelia A. Győrödi, Robert Ş. Győrödi, and Livia Bandici. "An Analysis of the Performance and Configuration Features of MySQL Document Store and Elasticsearch as an Alternative Backend in a Data Replication Solution." Applied Sciences 11, no. 24 (2021): 11590. http://dx.doi.org/10.3390/app112411590.

Abstract:
In recent years, with the increase in the volume and complexity of data, choosing a suitable database for storing huge amounts of data is not easy, because aspects such as manageability, scalability, and extensibility must be considered. NoSQL databases have gained immense popularity for their efficiency in managing such datasets compared to relational databases. However, relational databases still exhibit advantages in certain circumstances, so many applications use a combined relational and non-relational approach. This paper performs a comparative evaluation of two popular open-source DBMSs, MySQL Document Store and Elasticsearch, as non-relational DBMSs; the comparison is based on a detailed analysis of CRUD operations for different amounts of data, showing how the databases can be modeled and used in an application. A case-study application was developed for this purpose in the Java programming language with the Spring framework, using relational MySQL as well as non-relational Elasticsearch and MySQL Document Store for data storage. To model the real situation encountered in several production applications that use both relational and non-relational databases, a data replication solution was proposed and implemented that imports data from the primary relational MySQL database into Elasticsearch and MySQL Document Store as possible alternatives for more efficient data search.
31

Nurwanti, Ratri, Dian Putri Permatasari, Ika Fitria, Zakki Munawar Ahmad, and Yohana Dwivi Anggraini. "Efek Stres terhadap False Memory Recall dan Recognition." Suksma: Jurnal Psikologi Universitas Sanata Dharma 3, no. 2 (2022): 60–73. http://dx.doi.org/10.24071/suksma.v3i2.5188.

Abstract:
Memory is often thought of as a video recorder that records and stores events precisely as they occur. In fact, besides being constructive, memory is also reconstructive: it can change under certain conditions, resulting in false memories. The effect of stress on false memory was tested in this between-subjects experiment. Participants (N = 38) were divided into two conditions through random assignment: a control condition (N = 27) and a stress (experimental) condition (N = 11). We used a modified Trier Social Stress Test-Group to induce stress and the Deese-Roediger-McDermott paradigm to measure false memory. An independent-samples t-test showed no significant difference in false memory recall or false memory recognition between participants in the experimental and control conditions, indicating that stress did not affect false memory. These findings underline the importance of replication studies that vary the stage of memory processing at which stress is induced and the form of stress induction, to produce a more precise understanding of the mechanism linking stress and false memory.
32

Sutormin, Dmitry A., Alina Kh Galivondzhyan, Alexander V. Polkhovskiy, Sofia O. Kamalyan, Konstantin V. Severinov, and Svetlana A. Dubiley. "Diversity and Functions of Type II Topoisomerases." Acta Naturae 13, no. 1 (2021): 59–75. http://dx.doi.org/10.32607/actanaturae.11058.

Abstract:
The DNA double helix provides a simple and elegant way to store and copy genetic information. However, processes requiring separation of the DNA helix strands, such as transcription and replication, induce a topological side effect: supercoiling of the molecule. Topoisomerases comprise a specific group of enzymes that disentangle the topological challenges associated with DNA supercoiling: they relax DNA supercoils and resolve catenanes and knots. Here, we review the catalytic cycles, evolution, diversity, and functional roles of type II topoisomerases in organisms from all domains of life, as well as in viruses and other mobile genetic elements.
33

Gaunt, Eleanor R., Winsome Cheung, James E. Richards, Andrew Lever, and Ulrich Desselberger. "Inhibition of rotavirus replication by downregulation of fatty acid synthesis." Journal of General Virology 94, no. 6 (2013): 1310–17. http://dx.doi.org/10.1099/vir.0.050146-0.

Abstract:
Recently the recruitment of lipid droplets (LDs) to sites of rotavirus (RV) replication was reported. LDs are polymorphic organelles that store triacylglycerols, cholesterol and cholesterol esters. The neutral fats are derived from palmitoyl-CoA, synthesized via the fatty acid biosynthetic pathway. RV-infected cells were treated with chemical inhibitors of the fatty acid biosynthetic pathway, and the effects on viral replication kinetics were assessed. Treatment with compound C75, an inhibitor of the fatty acid synthase enzyme complex (FASN), reduced RV infectivity 3.2-fold (P = 0.07) and modestly reduced viral RNA synthesis (1.2-fold). Acting earlier in the fatty acid synthesis pathway, TOFA [5-(tetradecyloxy)-2-furoic acid] inhibits the enzyme acetyl-CoA carboxylase 1 (ACC1). TOFA reduced the infectivity of progeny RV 31-fold and viral RNA production 6-fold. The effect of TOFA on RV infectivity and RNA replication was dose-dependent, and infectivity was reduced by administering TOFA up to 4 h post-infection. Co-treatment of RV-infected cells with C75 and TOFA synergistically reduced viral infectivity. Knockdown by siRNA of FASN and ACC1 produced findings similar to those observed by inhibiting these proteins with the chemical compounds. Inhibition of fatty acid synthesis using a range of approaches uniformly had a more marked impact on viral infectivity than on viral RNA yield, suggesting a role for LDs in virus assembly and/or egress. Specific inhibitors of fatty acid metabolism may help pinpoint the critical structural and biochemical features of LDs that are essential for RV replication, and facilitate the development of antiviral therapies.
34

Mishra, Garima, Lavi S. Bigman, and Yaakov Levy. "ssDNA diffuses along replication protein A via a reptation mechanism." Nucleic Acids Research 48, no. 4 (2020): 1701–14. http://dx.doi.org/10.1093/nar/gkz1202.

Abstract:
Replication protein A (RPA) plays a critical role in all eukaryotic DNA processing involving single-stranded DNA (ssDNA). Contrary to the notion that RPA provides solely inert protection to transiently formed ssDNA, the RPA–ssDNA complex acts as a dynamic DNA processing unit. Here, we studied the diffusion of RPA along 60 nt ssDNA using a coarse-grained model in which the ssDNA–RPA interface was modeled by both aromatic and electrostatic interactions. Our study provides direct evidence of bulge formation during the diffusion of ssDNA along RPA. Bulges can form at a few sites along the interface and store 1–7 nt of ssDNA whose release, upon bulge dissolution, leads to propagation of ssDNA diffusion. These findings thus support the reptation mechanism, which involves bulge formation linked to the aromatic interactions, whose short-range nature reduces cooperativity in ssDNA diffusion. Greater cooperativity and a larger diffusion coefficient for ssDNA diffusion along RPA are observed for RPA variants with weaker aromatic interactions and for interfaces homogeneously stabilized by electrostatic interactions. ssDNA propagation in the latter instance is characterized by lower probabilities of bulge formation; thus, it may fit the sliding-without-bulge model better than the reptation model. Thus, the reptation mechanism allows ssDNA mobility despite the extensive and high-affinity interface of RPA with ssDNA. The short-range aromatic interactions support bulge formation, while the long-range electrostatic interactions support the release of the stored excess ssDNA in the bulge and thus the overall diffusion.
35

Wang, Su Zhen, Liu Wei, and Zhi Juan Du. "A Approach of Credibility-Based Trust Management Service in Cloud Environments." Applied Mechanics and Materials 278-280 (January 2013): 1771–78. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.1771.

Abstract:
Trust management is one of the most challenging issues in the emerging cloud computing paradigm. A distributed approach is adopted to manage and store trust feedback data. We propose a credibility-based trust management model that not only identifies credible trust feedback but also detects malicious trust feedback from attackers. On this basis, we also present a replication determination model that dynamically decides the optimal number of replicas of the trust management service, so that the service can always be maintained at the desired availability level. The research results have been validated experimentally.
36

Dong, Siying, Andrew Kryczka, Yanqin Jin, and Michael Stumm. "RocksDB: Evolution of Development Priorities in a Key-value Store Serving Large-scale Applications." ACM Transactions on Storage 17, no. 4 (2021): 1–32. http://dx.doi.org/10.1145/3483840.

Abstract:
This article is an eight-year retrospective on development priorities for RocksDB, a key-value store developed at Facebook that targets large-scale distributed systems and that is optimized for Solid State Drives (SSDs). We describe how the priorities evolved over time as a result of hardware trends and extensive experience running RocksDB at scale in production at a number of organizations: from optimizing write amplification, to space amplification, to CPU utilization. We describe lessons from running large-scale applications, including that resource allocation needs to be managed across different RocksDB instances, that data formats need to remain backward- and forward-compatible to allow incremental software rollouts, and that appropriate support for database replication and backups is needed. Lessons from failure handling taught us that data corruption errors needed to be detected earlier and that data integrity protection mechanisms are needed at every layer of the system. We describe improvements to the key-value interface. We describe a number of efforts that in retrospect proved to be misguided. Finally, we describe a number of open problems that could benefit from future research.
37

Lee, Jinsu, and Eunji Lee. "Concerto: Dynamic Processor Scaling for Distributed Data Systems with Replication." Applied Sciences 11, no. 12 (2021): 5731. http://dx.doi.org/10.3390/app11125731.

Abstract:
A surge of interest in data-intensive computing has led to a drastic increase in the demand for data centers. Given this growing popularity, data centers are becoming a primary contributor to the increased consumption of energy worldwide. To mitigate this problem, this paper revisits DVFS (Dynamic Voltage and Frequency Scaling), a well-known technique for reducing the energy usage of processors, from the viewpoint of distributed systems. Distributed data systems typically adopt a replication facility to provide high availability and short latency. In this type of architecture, the replicas are maintained in an asynchronous manner, while the master operates synchronously on user requests. Based on this relaxed constraint on replicas, we present a novel DVFS technique called Concerto, which intentionally scales down the frequency of processors operating for the replicas. This mechanism achieves considerable energy savings without an increase in user-perceived latency. We implemented Concerto on Redis 6.0.1, a commercial-level distributed key-value store, demonstrating that all associated performance issues were resolved. To prevent delays in read queries assigned to the replicas, we offload the independent part of the read operation to a fast-running thread. We also empirically demonstrate that the decreased performance of the replicas does not increase replication lag, because the inherent load imbalance between the master and replicas hides the replicas' increased latency. Performance evaluations with micro and real-world benchmarks show that with Concerto, Redis saves 32% of energy on average and up to 51% under various workloads, with minor performance losses in the replicas. Despite numerous studies of energy saving in data centers, to the best of our knowledge, Concerto is the first approach that considers clock-speed scaling at the aggregate level, exploiting heterogeneous performance constraints across data nodes.
38

Grandey, Alicia A., Lori S. Goldberg, and S. Douglas Pugh. "Why and When do Stores With Satisfied Employees Have Satisfied Customers?" Journal of Service Research 14, no. 4 (2011): 397–409. http://dx.doi.org/10.1177/1094670511410304.

Abstract:
Stores with more satisfied employees also have greater customer satisfaction (CS). Two theoretical mechanisms have been employed to explain why: affective transfer (i.e., emotional contagion) and performance motivation (i.e., extra-effort service behaviors). The authors provide a constructive replication of these relationships, while also arguing for an important boundary condition: store busyness. The authors suggest that in busy stores, employee attitudes (a) are less likely to be emotionally expressed by employees and “caught” by customers, and (b) are less likely to emerge as extra-effort performance, compared to slow stores. In a survey study of 328 warehouse-style retail stores, with multisource and time-separated data and controlling for contextual features, the authors support both direct affective transfer and indirect effects via an objective performance measure (i.e., speed of response to customers' requests for help). However, these associations depended on store busyness: store employee satisfaction had less influence on CS and service responsiveness in busy stores compared to slower stores. The results suggest several practical implications. For example, interventions targeting employee morale will have a greater effect on customer reactions in “less successful” stores with fewer sales transactions, while busy stores will see more benefit from interventions targeting other factors.
APA, Harvard, Vancouver, ISO, and other styles
39

Berkennou, Ahmed, Ghalem Belalem, and Said Limam. "A replication and migration strategy on the hierarchical architecture in the fog computing environment." Multiagent and Grid Systems 16, no. 3 (2020): 291–307. http://dx.doi.org/10.3233/mgs-200333.

Full text
Abstract:
Connected objects have become increasingly popular in recent years, with more than 50 billion objects expected to be connected by the end of 2020. This large number of objects generates a huge amount of data that is currently processed and stored in the cloud. Fog Computing presents a promising solution to the high latency and heavy network traffic encountered in the cloud. As Fog infrastructures are dense, heterogeneous, and geo-distributed, managing data to satisfy users' demands in such a context is very complicated. In this work, we propose a data management strategy called 'RMS-HaFC' that takes into account the characteristics of the Fog Computing environment. To do so, we propose a hierarchical multi-layer model, on which we design migration and replication strategies based on data popularity. These strategies duplicate files dynamically and store them at different locations to improve the response time of users' requests and minimize the system's energy consumption without increasing network usage. The strategy was evaluated using the iFogSim simulator, and the experimental results obtained are very promising.
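A compact sketch of popularity-driven migration across a fog hierarchy, in the spirit of RMS-HaFC, follows; the tier names and the popularity threshold are assumptions for illustration, not the paper's parameters.

# Toy sketch: files that are read often enough migrate one tier closer
# to users, cutting response time. Tiers and threshold are assumed.
TIERS = ["cloud", "regional-fog", "edge-fog"]   # top to bottom (assumed)
POPULARITY_THRESHOLD = 10                       # reads before migrating (assumed)

access_count = {}
placement = {}   # file -> tier currently serving it

def read(file_id):
    access_count[file_id] = access_count.get(file_id, 0) + 1
    tier = placement.get(file_id, "cloud")
    if access_count[file_id] >= POPULARITY_THRESHOLD and tier != "edge-fog":
        placement[file_id] = TIERS[TIERS.index(tier) + 1]  # move one tier down
        access_count[file_id] = 0                          # restart the window
    return tier

for _ in range(25):
    read("sensor-log-7")
print("now served from:", placement["sensor-log-7"])   # edge-fog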
APA, Harvard, Vancouver, ISO, and other styles
40

Leorda, Ana, Svetlana Garaeva, and Valentina Ciochina. "Impact of COVID-19 infection on the gastrointestinal tract." Bulletin of the Academy of Sciences of Moldova. Life Sciences, no. 1 (343) (January 2022): 38–43. http://dx.doi.org/10.52388/1857-064x.2021.1.05.

Full text
Abstract:
The intestine is a target organ of SARS-CoV-2 coronavirus infection, and the intestinal epithelium supports the replication of SARS-CoV-2. Blockade of ACE-2 receptors by SARS-CoV-2 impairs the absorption of amino acids in the small intestine, depleting the amino acid store. Mechanisms of damage to the pancreas include direct cytopathic effects of SARS-CoV-2 or indirect systemic inflammatory and immune-mediated cellular responses leading to organ damage or secondary enzymatic abnormalities. Gastrointestinal symptoms can occur earlier than other COVID-19 symptoms, and early testing can facilitate diagnosis and treatment before the disease becomes severe.
APA, Harvard, Vancouver, ISO, and other styles
41

Ziyu, Liu, and Zhao Lixia. "Game Analysis on the Influence of Participants’ Psychology on Value Co-Creation in Community E-Commerce Platform Supply Chain." Computational Intelligence and Neuroscience 2022 (July 14, 2022): 1–17. http://dx.doi.org/10.1155/2022/4684068.

Full text
Abstract:
As e-commerce continues to develop, online shopping is becoming more and more popular, and the community e-commerce platform has emerged as a result. If the participants in the supply chain of a community e-commerce platform play a greater role in value creation, the supply chain can obtain more benefits. This paper studies the psychology of all participants in the supply chain of a community e-commerce platform in order to establish the platform's value co-creation mechanism. By studying the psychology of the two main participants in the supply chain of a community-based online store, the community e-commerce platform and the community leader, and considering whether each will do its best to participate in value co-creation, a two-party evolutionary game model is constructed. By solving the replicator dynamic equations, the influence of the participants' psychology on the revenue of the community-based online store supply chain is obtained, and the evolutionary trend of both parties in the game is simulated in Matlab. The key factors that affect the co-creation of value in a community-based online store supply chain are found to be the proportion of increased income, the cost of participating in value co-creation, and the purchasing power of the consumers brought in by the participation of both parties. Finally, suggestions and countermeasures are put forward based on the analysis of the simulation results.
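The equations the authors solve are the replicator equations of evolutionary game theory. A minimal numerical sketch follows; all payoff numbers and initial strategy shares are illustrative placeholders, not the paper's parameters.

# Two-population replicator dynamic of the kind the paper simulates in
# Matlab. Payoffs and starting shares below are invented for illustration.
def replicator_step(x, y, dt=0.01):
    # x: share of platforms co-creating; y: share of leaders co-creating
    ux = 0.6 * y - 0.1      # platform's payoff gain from co-creating (assumed)
    uy = 0.5 * x - 0.1      # leader's payoff gain from co-creating (assumed)
    dx = x * (1 - x) * ux   # replicator equation for the platform population
    dy = y * (1 - y) * uy   # replicator equation for the leader population
    return x + dt * dx, y + dt * dy

x, y = 0.3, 0.4             # initial strategy shares (assumed)
for _ in range(5000):
    x, y = replicator_step(x, y)
print(f"long-run shares: platform={x:.2f}, leader={y:.2f}")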
APA, Harvard, Vancouver, ISO, and other styles
42

Scussel, Fernanda Bueno Cardoso, Francisco Pujol Filho, Martin De La Martinière Petroll, and Cláudio Damacena. "Me chama que eu vou: o efeito das vitrines no comportamento de compra do consumidor brasileiro no varejo de moda." Revista de Administração da UFSM 13, no. 3 (2020): 566–86. http://dx.doi.org/10.5902/1983465933189.

Full text
Abstract:
Drawing on the consumer behavior literature in traditional retail, this article explores the role of the window display as a predictor of store entry and purchase decision. The study, carried out in a Brazilian capital, is a replication of Sen, Block and Chandran's (2002) seminal research and had 364 participants. Data were analyzed by structural equation modeling. Results indicate fashion trend, price, and sale information as the main drivers of the decision to enter the store, which is the most relevant factor for consumers when deciding to buy a product. These findings not only confirm the relationship between window display and decision making, but also reveal the window display as an antecedent of consumer decision making in traditional retail. Additionally, our findings show the sensitivity of Brazilian consumers to novelty and fashion trends, as well as their predisposition to enter stores and purchase products when they are on sale or when payment alternatives are announced. Regarding academic contributions, this is a first step in understanding the retail shopping experience, with the window display as the starting point. Thus, studies on store atmosphere, visual identity construction, and price strategies can benefit from this content. Considering the growth of e-commerce and the need to attract consumers to physical stores, these results also contribute to studies on retail business strategies, suggesting a relationship between the effect provoked by window displays and business performance.
APA, Harvard, Vancouver, ISO, and other styles
43

Muhammad Irham, Lalu, Wan-Hsuan Chou, Yu-Shiuan Wang, et al. "Evaluation for the Genetic Association between Store-Operated Calcium Influx Pathway (STIM1 and ORAI1) and Human Hepatocellular Carcinoma in Patients with Chronic Hepatitis B Infection." Biology 9, no. 11 (2020): 388. http://dx.doi.org/10.3390/biology9110388.

Full text
Abstract:
Hepatocellular carcinoma (HCC) often develops from chronic hepatitis B (CHB) through persistent replication of the hepatitis B virus (HBV). Calcium (Ca2+) signaling plays an essential role in HBV replication. Store-operated calcium (SOC) channels are a major pathway of Ca2+ entry into non-excitable cells such as immune cells and cancer cells. The basic components of SOC signaling include the STIM1 and ORAI1 genes. However, the roles of STIM1 and ORAI1 in HBV-mediated HCC are still unclear. Thus, a long-term follow-up of an HBV cohort was carried out in this study, which recruited 3631 patients with chronic hepatitis (345 with HCC, 3286 without HCC) from a Taiwanese population. Genetic variants of the STIM1 and ORAI1 genes were detected using an Axiom CHB1 genome-wide array, and the clinical associations of 40 polymorphisms were analyzed. Three STIM1 single-nucleotide polymorphisms (SNPs) (rs6578418, rs7116520, and rs11030472) and one ORAI1 SNP (rs6486795) showed a trend of association with HCC (p < 0.05). However, after correction for multiple testing, none of the SNPs reached significance (q > 0.05); thus, neither STIM1 nor ORAI1 showed a significant association with HCC progression in CHB patients. Functional studies using both total internal reflection fluorescence imaging and transwell migration assays indicated the critical roles of SOC-mediated signaling in HCC migration. In conclusion, we report a weak correlation between STIM1/ORAI1 polymorphisms and the risk of HCC progression in CHB patients.
APA, Harvard, Vancouver, ISO, and other styles
44

Miyatake, Shin-Ichi, Hiroyuki Yukawa, Hiroki Toda, Norihiro Matsuoka, Rei Takahashi, and Nobuo Hashimoto. "Inhibition of Rat Vascular Smooth Muscle Cell Proliferation In Vitro and In Vivo by Recombinant Replication-Competent Herpes Simplex Virus." Stroke 30, no. 11 (1999): 2431–39. http://dx.doi.org/10.1161/01.str.30.11.2431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Guroob, Abdo H. "EA2-IMDG: Efficient Approach of Using an In-Memory Data Grid to Improve the Performance of Replication and Scheduling in Grid Environment Systems." Computation 11, no. 3 (2023): 65. http://dx.doi.org/10.3390/computation11030065.

Full text
Abstract:
This paper proposes EA2-IMDG (Efficient Approach of Using an In-Memory Data Grid), a novel approach to improve the performance of replication and scheduling in grid environment systems. Grid environments are widely used for distributed computing, but they are often faced with the challenge of high data access latency and poor scalability. By utilizing an in-memory data grid (IMDG), the approach aims to significantly reduce data access latency and improve the resource utilization of the system. The approach uses the IMDG to store data in RAM instead of on disk, allowing for faster data retrieval and processing. The IMDG distributes data across multiple nodes, which helps to reduce the risk of data bottlenecks and improves the scalability of the system. To evaluate the proposed approach, a series of experiments was conducted, comparing its performance with two baseline approaches: a centralized database and a centralized file system. The results show that the EA2-IMDG approach improves the performance of replication and scheduling tasks by up to 90% in terms of data access latency and up to 50% in terms of resource utilization. These results suggest that the EA2-IMDG approach is a promising solution for improving the performance of grid environment systems.
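As a rough illustration of the IMDG idea (RAM-resident storage with keys spread across nodes so no single node becomes a bottleneck), here is a toy sketch; the node names and replication factor are invented for the example and do not reproduce the paper's design.

# Toy in-memory data grid: keys are hashed to pick owner nodes, values
# live in per-node dictionaries standing in for RAM-resident stores.
import hashlib

class InMemoryDataGrid:
    def __init__(self, nodes, replicas=2):
        self.nodes = nodes
        self.replicas = replicas
        self.store = {n: {} for n in nodes}   # one RAM store per node

    def _owners(self, key):
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        start = h % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)]
                for i in range(self.replicas)]

    def put(self, key, value):
        for node in self._owners(key):        # write to every replica node
            self.store[node][key] = value

    def get(self, key):
        return self.store[self._owners(key)[0]].get(key)

grid = InMemoryDataGrid(["node-a", "node-b", "node-c"])
grid.put("job-42", {"state": "scheduled"})
print(grid.get("job-42"))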
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Xiuguo. "Data Sets Replicas Placements Strategy from Cost-Effective View in the Cloud." Scientific Programming 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/1496714.

Full text
Abstract:
Replication technology is commonly used to improve data availability and reduce data access latency in cloud storage systems by providing users with different replicas of the same service. Most current approaches largely focus on improving system performance, neglecting management cost when deciding the number of replicas and their storage locations. This causes a great financial burden for cloud users, because the cost of replica storage and consistency maintenance can lead to high overhead as the number of new replicas increases in a pay-as-you-go paradigm. In this paper, towards achieving an approximately minimal data set management cost in a practical manner, we propose a replica placement strategy from a cost-effective view, with the premise that system performance meets requirements. Firstly, we design data set management cost models, including storage cost and transfer cost. Secondly, we use the access frequency and the average response time to decide which data sets should be replicated. Then, a method for calculating the number of replicas and their storage locations with minimum management cost is proposed, based on a location problem graph. Both theoretical analysis and simulations show that the proposed strategy offers the benefit of lower management cost with fewer replicas.
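The storage-versus-transfer trade-off behind such cost models can be sketched in a few lines; the prices, access counts, and the crude 1/n locality model below are illustrative assumptions, not the paper's formulation.

# Each extra replica adds storage cost but cuts transfer cost; pick the
# replica count with the lowest total. All numbers are assumed.
STORAGE_COST_PER_GB = 0.02      # $/GB/month (assumed pay-as-you-go price)
TRANSFER_COST_PER_GB = 0.09     # $/GB moved between sites (assumed)

def management_cost(n_replicas, size_gb, monthly_reads, remote_fraction):
    storage = n_replicas * size_gb * STORAGE_COST_PER_GB
    # More replicas mean more reads served locally (crude 1/n model, assumed).
    transfer = (monthly_reads * size_gb * remote_fraction / n_replicas
                * TRANSFER_COST_PER_GB)
    return storage + transfer

best = min(range(1, 11),
           key=lambda n: management_cost(n, size_gb=5, monthly_reads=20,
                                         remote_fraction=0.6))
print("cheapest replica count:", best)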
APA, Harvard, Vancouver, ISO, and other styles
47

Taghfir, Dimas Bima, Syaiful Anwar, and Budi Adi Kristanto. "Kualitas benih dan pertumbuhan bibit cabai (Capsicum frutescens l.) pada perlakuan suhu dan wadah penyimpanan yang berbeda." Journal of Agro Complex 2, no. 2 (2018): 137. http://dx.doi.org/10.14710/joac.2.2.137-147.

Full text
Abstract:
The temperature of the seed storage space and the storage container greatly affect seed quality. The aim of this research was to study the effect of storage temperature, storage container, and their interaction on the seed quality and seedling growth of chilli. The study was conducted in Jetis Village and the Laboratory of Plant Physiology and Breeding, Faculty of Animal and Agricultural Sciences, Diponegoro University, from January to June 2017, using a nested experiment based on a Completely Randomized Design. The first factor was storage temperature (R1 = room temperature, 24-29°C; R2 = refrigerator temperature, 5°C) and the second factor was the storage container nested within storage temperature (P1 = aluminium foil, P2 = paper, P3 = plastic). Each treatment had 5 replications and each replication consisted of 100 seeds, giving 30 experimental units. The data were analyzed using analysis of variance (ANOVA), followed by the HSD (Honestly Significant Difference) test at the 5% significance level. The results showed that storage at refrigerator temperature (5°C) increased germination and produced a larger seed vigor index than room temperature (28°C). The aluminium foil packaging produced higher maximum growth potential and germination rate than the plastic and paper packaging, but growth rate and vigor index did not differ. The low storage temperature (5°C) could not maintain maximum seed quality, as four parameters were still below the seed quality standard. Seeds stored at low temperature (5°C) produced greater seedling fresh weight and dry weight than those stored at high temperature (28°C), but the number of leaves, seedling height, and hypothetical vigor index were not significantly different. The aluminium foil packaging produced higher seedling fresh weight and dry weight than the plastic and paper packaging; however, the number of leaves, seedling height, and hypothetical vigor index were not significantly different. Keywords: temperature, storage container, seed quality, seedling growth, chilli.
APA, Harvard, Vancouver, ISO, and other styles
48

Miao, Jia Jia, Guo You Chen, Kai Du, and Xue Lin Fang. "High Performance and High Availability Archived Stream System for Big Data." Applied Mechanics and Materials 263-266 (December 2012): 2792–95. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2792.

Full text
Abstract:
The increasing number of big-data applications, such as Web search engines, requires highly available, 24/7 tracking, storage, and analysis of large volumes of real-time user access logs. Traditional high-availability solutions for common transactional applications are not efficient enough to store this high-rate, insert-only archived stream. This paper presents an integrated approach to saving such archived data streams in a database cluster and recovering them rapidly. The method is based on a simple replication protocol and a high-performance data loading and query strategy. Experimental results show that our approach achieves efficient data loading and querying, and a shorter recovery time than traditional database cluster recovery methods.
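The insert-only nature of an archived stream is what makes batched replication cheap; the following toy sketch illustrates that idea, with the batch size and node layout invented for the example (the paper's actual protocol is not reproduced here).

# Toy insert-only archive replicated in batches: rows are buffered and
# shipped to the primary and backup as full batches, never as single rows.
BATCH_SIZE = 1000   # assumed batch size

class ArchiveNode:
    def __init__(self):
        self.rows = []
    def load_batch(self, batch):
        self.rows.extend(batch)   # bulk append: no updates, no deletes

primary, backup = ArchiveNode(), ArchiveNode()
pending = []

def append(row):
    pending.append(row)
    if len(pending) == BATCH_SIZE:
        primary.load_batch(pending)
        backup.load_batch(list(pending))   # replicate the identical batch
        pending.clear()

for i in range(2500):
    append({"ts": i, "url": "/index"})
print(len(primary.rows), len(backup.rows))   # 2000 2000; 500 still buffered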
APA, Harvard, Vancouver, ISO, and other styles
49

Khobragade, Shrutika, Rohini Bhosale, and Rahul Jiwahe. "High Security Mechanism: Fragmentation and Replication in the Cloud with Auto Update in the System." APTIKOM Journal on Computer Science and Information Technologies 5, no. 2 (2020): 54–59. http://dx.doi.org/10.34306/csit.v5i2.138.

Full text
Abstract:
Cloud computing makes immense use of the internet to store a huge amount of data. It provides high-quality service at low cost and with scalability, while requiring less hardware and software management. Security plays a vital role in the cloud: since data is handled by a third party, security is the biggest concern. The proposed mechanism focuses on security issues in the cloud. A complete file stored at a single location might be affected by an attack, resulting in data loss. So, in this proposed work, instead of storing a complete file at a particular location, the file is divided into fragments and each fragment is stored at a different location. The fragments are further secured by assigning a hash key to each fragment. This mechanism will not reveal all the information about a particular file even after a successful attack. The replication of fragments is also generated, with a strong authentication process using key generation. The auto-update of a fragment or any file is also provided: a file or a fragment can be updated online, and instead of downloading the whole file, a single fragment can be downloaded to update it. More time is saved using this methodology.
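A hedged sketch of the fragment-and-disperse idea described here follows: split a file into pieces, tag each piece with a SHA-256 hash for integrity checking, and place the pieces at different sites. The site names and fragment count are placeholders, not part of the proposed mechanism.

# Split a file into fragments, hash each for integrity, and spread them
# across sites so no single site holds the whole file. Sites are assumed.
import hashlib

LOCATIONS = ["site-1", "site-2", "site-3"]   # hypothetical storage sites

def fragment(data: bytes, n_fragments: int):
    size = -(-len(data) // n_fragments)       # ceiling division
    placements = []
    for i in range(n_fragments):
        chunk = data[i * size:(i + 1) * size]
        digest = hashlib.sha256(chunk).hexdigest()   # integrity hash key
        site = LOCATIONS[i % len(LOCATIONS)]         # disperse the pieces
        placements.append((site, digest, chunk))
    return placements

for site, digest, chunk in fragment(b"confidential report contents", 3):
    print(site, digest[:12], len(chunk), "bytes")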
APA, Harvard, Vancouver, ISO, and other styles
50

Khobragade, Shrutika, Rohini Bhosale, and Rahul Jiwane. "High security mechanism: fragmentation and replication in the cloud with auto update in the system." Computer Science and Information Technologies 1, no. 2 (2020): 78–83. http://dx.doi.org/10.11591/csit.v1i2.p78-83.

Full text
Abstract:
Cloud computing makes immense use of the internet to store a huge amount of data. It provides high-quality service at low cost and with scalability, while requiring less hardware and software management. Security plays a vital role in the cloud: since data is handled by a third party, security is the biggest concern. The proposed mechanism focuses on security issues in the cloud. A complete file stored at a single location might be affected by an attack, resulting in data loss. So, in this proposed work, instead of storing a complete file at a particular location, the file is divided into fragments and each fragment is stored at a different location. The fragments are further secured by assigning a hash key to each fragment. This mechanism will not reveal all the information about a particular file even after a successful attack. The replication of fragments is also generated, with a strong authentication process using key generation. The auto-update of a fragment or any file is also provided: a file or a fragment can be updated online, and instead of downloading the whole file, a single fragment can be downloaded to update it. More time is saved using this methodology.
APA, Harvard, Vancouver, ISO, and other styles